Configuring LaTeX for XePersian and TeXstudio
You may sometimes get bored with MS Word or (rarely) LibreOffice Writer. There is another document editor that feels much more like programming: LaTeX!
In this tutorial I will show how to install LaTeX on Windows with the TeXstudio editor, step by step with pictures.
How to turn your TV to Media Center
You may like to collect and watch movies, TV series, or music archives. Then you have to keep them on some storage ([external] hard drives, a NAS, …) and use software to manage them.
This scenario sounds good, but there are other great ways to turn your home network into a fully capable media center. Read more to learn how to configure your home network as a media center.
Reversing Java: Part III
In the previous tutorials, I described the basic Java byte code structure and started to reverse a simple Hello World Java application. In this tutorial, I will describe the remaining parts of the class file.
Support #Urmia Lake
Reversing Java: Part II
In Reversing Java: Part I, I described the main structure of the Java class file bytes. In this part, I'll continue decompiling the HelloWorld example.
Reversing Java: Part I
Recently I've become interested in the byte code structure of Java and Dalvik, and I've found some useful tools for playing with them.
Destination Byte Code
Java byte code is relatively easy to reverse engineer because it is not compiled to native machine code ahead of time: the JVM interprets (or JIT-compiles) the byte code at run time. This is what makes Java cross-platform, but it also runs with more overhead than directly compiled machine code (for example, C++ built with gcc).
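To make this concrete, here is a minimal HelloWorld of the kind the series disassembles (this is my own sketch; the original post's exact sample may differ):

```java
// Minimal Java program; compiling it with javac produces HelloWorld.class,
// whose byte code can then be inspected with `javap -c HelloWorld`.
public class HelloWorld {
    public static String greet() {
        return "Hello, World!"; // the string literal lives in the class constant pool
    }

    public static void main(String[] args) {
        System.out.println(greet());
    }
}
```

Because the .class file keeps method names, signatures, and the constant pool, a disassembler can recover something very close to the original source.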
Mathematical LDA
In the previous post (Latent Dirichlet Allocation), I’ve described the LDA process and how it can be applied on documents.
In this post I will explain how the probabilities can be estimated using collapsed Gibbs sampling.
Let's start with the LDA probabilistic graphical model.
Here W is a sampled word from a document, Z is the topic assigned to each word of document d, θ is the per-document topic distribution (drawn from a Dirichlet with parameter α), and φ is the per-topic word distribution (drawn from a Dirichlet with parameter β). More info about the hyperparameters can be found at this link.
So the only observed variables are α, β, and w; all the others (z, θ, and φ) are hidden. Based on the LDA graph we have:
p(w, z, θ, φ | α, β) = p(φ|β) p(θ|α) p(z|θ) p(w|φ_z)
The right-hand side of this joint probability can be read directly off the probabilistic graphical model, where each variable depends only on its parent nodes.
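Integrating θ and φ out of this joint distribution (the "collapsed" step) leads to the standard collapsed Gibbs sampling update for a single topic assignment. The count notation below is my own shorthand, not taken from the post:

```latex
p(z_i = k \mid z_{\neg i}, w, \alpha, \beta) \;\propto\;
\left(n_{d,k}^{\neg i} + \alpha\right)
\frac{n_{k,w_i}^{\neg i} + \beta}{n_{k}^{\neg i} + V\beta}
```

where $n_{d,k}$ is the number of words in document $d$ assigned to topic $k$, $n_{k,w}$ the number of times word $w$ is assigned to topic $k$, $n_k$ the total number of words assigned to topic $k$, $V$ the vocabulary size, and $\neg i$ means the current word is excluded from the counts.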
Social Share Activated
Latent Dirichlet Allocation
What is Latent Dirichlet Allocation?
In a general view, LDA is an unsupervised method for clustering documents. It models (preprocessed) documents as bags of words. It also assumes that each word (and each document) has a mixture of topics, i.e. each word (and document) may belong to each of the topics with some probability. It takes the number of topic clusters in the corpus as input, then simply assigns each word in each document a random topic, and then iteratively tries to improve these assignments.
That was a very general description of LDA.