On Tuesday, after hacking into Lolita City, Anonymous exposed a large ring of Internet pedophiles.
Lolita City, a darknet website used to trade in child pornography, hosted roughly 1,589 members trading in kiddie porn. A darknet website is a closed, private network of computers used for file sharing, sometimes referred to as a hidden wiki. Darknet websites are part of the Invisible Web, which is not indexed by standard search engines.
In Tuesday’s Pastebin release, Anonymous explained the technical side of how it was able to locate and identify Lolita City and access its user database. In a prior Pastebin release, Anonymous offered a timeline of events detailing the discovery of the hidden cache of more than 100 gigabytes of child porn associated with Lolita City.
The following is a statement concerning Operation Darknet, and a list of demands released by Anonymous:
The owners and operators at Freedom Hosting are openly supporting child pornography and enabling pedophiles to view innocent children, fueling their issues and putting children at risk of abduction, molestation, rape, and death.
For this, Freedom Hosting has been declared #OpDarknet Enemy Number One.
By taking down Freedom Hosting, we are eliminating 40+ child pornography websites, among these is Lolita City, one of the largest child pornography websites to date containing more than 100GB of child pornography.
We will continue to not only crash Freedom Hosting's server, but any other server we find to contain, promote, or support child pornography.
Our demands are simple. Remove all child pornography content from your servers. Refuse to provide hosting services to any website dealing with child pornography. This statement is not just aimed at Freedom Hosting, but everyone on the internet. It does not matter who you are, if we find you to be hosting, promoting, or supporting child pornography, you will become a target.
The Invisible Web
The Invisible Web (also called Deepnet, the Deep Web, DarkNet, Undernet, or the hidden Web) refers to World Wide Web content that is not part of the Surface Web, which is indexed by standard search engines. Some have said that searching on the Internet today can be compared to dragging a net across the surface of the ocean: a great deal may be caught in the net, but there is a wealth of information that is deep and therefore missed.
Most of the Web's information is buried far down on dynamically generated sites, and standard search engines do not find it. Traditional search engines cannot find or retrieve content in the deep Web – those pages do not exist until they are created dynamically as the result of a specific search. The deep Web is several orders of magnitude larger than the surface Web.
To discover content on the Web, search engines use web crawlers that follow hyperlinks. This technique is ideal for discovering resources on the surface Web but is often ineffective at finding Invisible Web resources. For example, these crawlers do not attempt to find dynamic pages that are the result of database queries due to the infinite number of queries that are possible.
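The limitation described above can be illustrated with a small sketch. This is not any real search engine's crawler; it is a toy breadth-first traversal over an in-memory map of pages to hyperlinks (all page names here are made up for illustration). Because the dynamically generated results page has no inbound hyperlink from the surface pages, the crawler never discovers it or the deep-web page it links to:

```python
from collections import deque

# Toy "web": each page name maps to the hyperlinks it contains.
# The query-results page exists only when a search is submitted,
# so no static surface page links to it.
PAGES = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
    "search?q=records": ["deep-page"],  # dynamically generated
    "deep-page": [],                    # reachable only via the query page
}

def crawl(seed):
    """Breadth-first link-following crawl starting from a seed page."""
    seen = {seed}
    queue = deque([seed])
    while queue:
        page = queue.popleft()
        for link in PAGES.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

indexed = crawl("home")
# Only "home", "about", and "blog" are discovered; the query page and
# "deep-page" remain invisible because no followed hyperlink reaches them.
```

Real crawlers fetch pages over HTTP and parse HTML for links, but the discovery logic is the same: content with no inbound hyperlink from already-known pages is simply never reached.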