TOR Spy Services

Knowing all possible web paths in the world is the first step in building a search engine (SE). With an SE, one can analyze the web for the material one is interested in. In the normal Domain Name System, each TLD (Top Level Domain) provider can sell or release a list of all its domains. For example, the .com TLD operator can sell or release all the domains that end with ".com". The problem is more complicated on Tor (and other hidden service networks), where no such authoritative list exists. In this post I will talk about my tool named Onion Harvester and how to find initial seed points for discovering hidden services to crawl.
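One common way to collect such seed points (not necessarily how Onion Harvester works internally) is to scan crawled text for strings that look like .onion hostnames. A minimal sketch in Python, assuming v2 (16-character) and v3 (56-character) base32 address formats:

```python
import re

# .onion labels are base32 (a-z, 2-7): 16 chars for v2, 56 chars for v3.
ONION_RE = re.compile(r"\b[a-z2-7]{16}(?:[a-z2-7]{40})?\.onion\b")

def extract_onions(text):
    """Return the unique .onion addresses found in a blob of text."""
    return sorted(set(ONION_RE.findall(text)))
```

Feeding pages from pastebins, link directories, or clearnet search results through a function like this yields an initial list of hidden services to crawl.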

TOR Network


Continue reading Onion Harvester: First step to TOR Search Engines


Introduction to Cryptography and PGP 1

I held a workshop titled "Introduction to Cryptography and PGP" at Urmia University of Technology. I talked about the basic concepts of signing and encryption with asymmetric cryptography, and then about PGP, one of the main applications of this system. I finished the workshop by configuring Thunderbird and GPG4win to apply PGP to email. You may find more details in my previous posts about GPG in Thunderbird and the way PGP works.
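The core asymmetry behind PGP's signing and encryption can be illustrated with textbook RSA and tiny primes. This is a conceptual toy only; real PGP uses large keys plus padding and is never implemented this way:

```python
# Toy RSA: illustrates public/private key asymmetry. NOT secure.
p, q = 61, 53            # two small primes (real keys use huge primes)
n = p * q                # modulus, shared by both keys
phi = (p - 1) * (q - 1)  # Euler's totient of n
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent: modular inverse of e

def encrypt(m):          # anyone can encrypt with the public key (e, n)
    return pow(m, e, n)

def decrypt(c):          # only the private-key holder can decrypt
    return pow(c, d, n)

def sign(m):             # signing uses the private key...
    return pow(m, d, n)

def verify(m, s):        # ...and anyone verifies with the public key
    return pow(s, e, n) == m
```

Encryption and signing use the same math in opposite directions, which is why one PGP key pair supports both operations.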

Pictures from this session are here.

Continue reading “Introduction to Cryptography and PGP” Workshop


You may have visited SHODAN (the IoT search engine) or ZoomEye (the worldwide port search engine). These systems are very useful for getting a good view of the world! 🙂

But they restrict how many results you can see. For example, SHODAN lets unregistered users view just one page (10 results); if you register, the limit is five pages. Still a restriction!

So what to do?

Continue reading IVRE! Drunk Frenchman Port Scanner Framework!


In this tutorial I want to cover setting up Apache Spark on Ubuntu machines so that you can develop big data analysis applications with it.

First of all, here is a small and quick introduction to the Hadoop + Spark environment. Hadoop makes it possible to work with many computers in a cluster. That work can be: storing files in the cluster (HDFS, the Hadoop Distributed File System), storing a database in the cluster (Apache HBase), or running software in the cluster (MapReduce, Spark).
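The MapReduce model mentioned above can be sketched in plain Python. This is a conceptual, single-machine illustration of the map/shuffle/reduce phases, not Hadoop's actual API:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Mapper: emit a (word, 1) pair for every word in an input line.
    return [(word, 1) for word in line.split()]

def reduce_phase(pairs):
    # Shuffle: group values by key; Reduce: sum the counts per word.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

def word_count(lines):
    # Run every line through the mapper, then reduce all emitted pairs.
    return reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
```

In a real cluster, Hadoop distributes the map tasks across machines and moves the grouped pairs over the network before reducing; Spark keeps the intermediate data in memory, which is why it is much faster for iterative jobs.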

Continue reading Apache + Yarn + Spark: Play with Twitter data!
