
Surface Web, Deep Web, And Dark Web Explained

    When you think about the internet, what’s the first thing that comes to mind? Online shopping? Gaming? Gambling sites? Social media? Each of these relies on internet access to work, and it would be almost impossible to imagine life without the web as we know it today. However, how well do we really know the internet and its underlying components?

    Let’s first understand the origins of the Internet

    The “internet” as we know it today in fact began life as a project called ARPANET. The first workable version came in the late 1960s and used the acronym above rather than the less friendly “Advanced Research Projects Agency Network”. The project was initially funded by the U.S. Department of Defense, and used early forms of packet switching to allow multiple computers to communicate on a single network - similar in concept to what we’d now call a LAN (Local Area Network).

    The internet itself isn’t one machine or server. It’s an enormous collection of networking components - switches, routers, servers and much more - located all over the world, all communicating via common “protocols” (agreed methods of transport that data needs in order to reach other connected devices) such as TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). Both TCP and UDP use the principle of “ports” to create connections, and each connected device requires an internet address (known as an IP address) which is unique to that device, meaning it can be identified individually amongst millions of other interconnected devices.
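
    To make ports and IP addresses a little more concrete, here’s a minimal sketch in PHP (the hostname, port and timeout are illustrative examples, not part of the article) showing a TCP connection being opened to a web server:

    <?php
    // Open a TCP connection to example.com on port 80 (the standard HTTP port).
    // The hostname is first resolved to the unique IP address of that server.
    $fp = fsockopen("tcp://example.com", 80, $errno, $errstr, 5);
    if (!$fp) {
        echo "Connection failed: $errstr ($errno)";
    } else {
        echo "Connected over TCP via port 80";
        fclose($fp);
    }
    ?>

    A UDP connection works the same way with a udp:// prefix - the difference being that UDP simply fires off datagrams without first establishing a connection.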

    In 1983, ARPANET adopted the newly available TCP/IP protocol suite, which enabled scientists and engineers to assemble a “network of networks” - laying the foundation, or “web”, on which the internet as we know it today operates. The icing on the cake came in 1990 when Tim Berners-Lee created the World Wide Web (www as we affectionately know it) - effectively allowing websites and hyperlinks to work together to form the internet we know and use daily.

    However, over time, the internet model changed as various sites sought to remain outside the reach of search engines such as Google, Bing, Yahoo, and the like. This approach also gave content owners a mechanism to charge users for access to content - referred to today as a “paywall”. Out of this new model came, effectively, three layers of the internet.

    Three “Internets”?

    To make this easier to understand (hopefully), I’ve put together the diagram below.
    [Diagram: iceberg model of the internet - the Surface Web above the waterline, with the Deep Web and Dark Web beneath it]

    The “Surface Web”

    Ok - with the history lesson out of the way, we’ll get back to the underlying purpose of this article, which is to reveal the three “layers” of the internet. The easiest way to explain this is with the “Iceberg Model”.

    The tip of the iceberg is the “Surface Web” - the internet that forms part of our everyday lives, indexed by search engines and open to anyone. It consists of sites such as Google, Bing, Yahoo (to a lesser extent) and Wikipedia (common examples - there are thousands more).

    The “Deep Web”

    The next layer is known as the “Deep Web”, which typically consists of sites that do not expose themselves to search engines. They cannot be “crawled”, and will not feature in Google search results - you cannot access their content via a direct link without first having to log in. Sites in this category include Netflix, your Amazon or eBay account, PayPal, Google Drive and LinkedIn - essentially, anything that requires a login for you to gain access.
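
    To illustrate why these pages never appear in search results, here’s a hedged sketch (the file and session names are hypothetical) of the kind of login gate such sites sit behind - any visitor without an authenticated session, including a search engine crawler, is redirected before the content is ever served:

    <?php
    session_start();

    // Hypothetical login gate: no authenticated session, no content.
    // A crawler requesting this URL is bounced straight to the login page,
    // so the page body never makes it into a search index.
    if (!isset($_SESSION['user'])) {
        header('Location: /login.php');
        exit;
    }

    echo "Private account content - invisible to search engines";
    ?>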

    The “Dark Web”

    The third layer down is known as the “Dark Web” - and it’s “dark” for a reason. These are sites that truly live underground, out of reach for most standard internet users. Typically, access is gained via a TOR (The Onion Router - a bit more about that later) enabled browser, with website addresses made up of completely random characters (changing often to avoid detection) and carrying the suffix .onion. If I were asked to describe the Dark Web, I’d describe it as an underground online marketplace where anything goes - and I mean literally “anything”.

    Examples include:

    • Ransomware
    • Botnets
    • Bitcoin trading
    • Hacker services and forums
    • Financial fraud
    • Illegal pornography
    • Terrorism
    • Anonymous journalism
    • Drug cartels (including online marketplaces for sale and distribution - good examples being Silk Road and Silk Road II)
    • Whistleblowing sites
    • Information leakage sites (a bit like Wikileaks, but often containing information that even that site cannot obtain and make freely available)
    • Murder for hire (hitmen etc.)

    Takeaway

    The Surface, Deep, and Dark Web are in fact interconnected. The purpose of these classifications is to determine where certain activities that take place on the internet fit. While internet activity on the Surface Web is, for the most part, secure, activities in the Deep Web are hidden from view, but not necessarily harmful by nature. It’s very different in the case of the Dark Web. Thanks to its (virtually) anonymous nature, little is known about its true content. Various attempts have been made to “map” the Dark Web, but given that URLs change frequently and there is generally no trail of breadcrumbs leading back to the surface, it’s almost impossible to do so.
    In summary, the Surface Web is where search engine crawlers go to fetch useful information. By direct contrast, the Dark Web plays host to an entire range of nefarious activity, and is best avoided for security concerns alone.

  • @phenomlab some months ago I remember taking a look at the dark web… and I do not want to see it ever again…

    It’s really… dark. The content I saw scared me a lot…

  • @justoverclock yes, completely understand that. It’s a haven for criminal gangs and literally everything is on the table. Drugs, weapons, money laundering, cyber attacks for rent, and even murder for hire.

    Nothing, it seems, is off limits. The dark web is truly a place where the only limitation is the amount you are prepared to spend.


  • Blog Setup

    Here is an update. So one of the problems is that I was coding on Windows - duh, right? Windows was changing one of the forward slashes into a backslash when it got to the files folder where the image was being held. So I then booted up my VirtualBox instance of Ubuntu Server and set it up on there. And will wonders never cease - it worked. The other thing is that there’s more than one place to grab the templates from. I was grabbing the template from the widget when I should have been grabbing it from the other templates folder, and taking the code from the actual theme for the plugin. If any of that makes sense.

    I was able to set it up so it goes to mydomain/blog and I don’t have to forward it to user/username/blog. Now I am in the process of styling it the way I want it to look. I wish there was a way to use a newer version of Bootstrap - there are so many more options. I suppose I could install the newer version or add the CDN in the header, but I don’t want it to cause conflicts. Bootstrap 3 is a little lacking. I believe v2 of NodeBB uses a newer version of Bootstrap, or they have made it so you can use any framework you want for styling. I would have to double check, though.

    Thanks for your help @phenomlab! I really appreciate it. I am sure I will have more questions, so never fear - I won’t be going away… ever, hahaha.

    Thanks again!

  • Nodebb as blogging platform

    @qwinter I’ve extensive experience with Ghost, so let me know if you need any help.

  • Securing javascript -> PHP mysql calls on Website

    @mike-jones Hi Mike,

    There are multiple answers to this, so I’m going to provide some of the most important ones here.

    JS runs client side, so you shouldn’t rely on it solely for validation. Any values collected by JS will need to be passed back to the PHP backend for processing, and will need to be fully sanitised first to ensure that your database is not exposed to SQL injection. To pass those values back to PHP, you’ll need to use something like

    <script>
    // Grab the value client side, then hand it to the PHP backend via AJAX
    var myvalue = $('#id').val();
    $(document).ready(function() {
        $.ajax({
            type: "POST",
            url: "https://myserver/myfile.php?id=" + myvalue,
            success: function() {
                // Refresh the target div with the markup returned for this id
                $("#targetdiv").load('myfile.php?id=' + myvalue + ' #targetdiv', function() {});
            },
            //error: ajaxError
        });
        return false;
    });
    </script>

    Then collect that with PHP. Because the id travels on the query string (even though the AJAX call is a POST), it arrives as a GET parameter, such as

    <?php
    // The id arrives on the query string, so $_GET picks it up
    $myvalue = $_GET['id'];
    echo "The value is " . $myvalue;
    ?>

    Of course, the above is a basic example, but it is fully functional. Here, the risk level is low in the sense that you are not attempting to manipulate data, but simply request it. However, this in itself would still be vulnerable to SQL injection if the request is not handled through a prepared statement (using PDO’s object orientated interface). Here’s an example of how to get the data safely

    <?php
    function getid($theid) {
        global $db;
        // Prepared statement - the ? placeholder is bound separately from the
        // SQL text, so the supplied value can never be executed as SQL
        $stmt = $db->prepare("SELECT * FROM data WHERE id = ?");
        $stmt->execute([$theid]);
        $name = $address = $zip = null;
        while ($result = $stmt->fetch(PDO::FETCH_ASSOC)) {
            $name = $result['name'];
            $address = $result['address'];
            $zip = $result['zip'];
        }
        return array(
            'name' => $name,
            'address' => $address,
            'zip' => $zip
        );
    }
    ?>

    Essentially, using a prepared statement, we send a placeholder rather than the actual value. The database driver binds the value to the placeholder and treats it strictly as data, so it is never executed as part of the SQL itself. This prevents typical injections such as “AND 1=1”, which would otherwise end up returning everything - not what you want at all for security reasons.

    When calling the function, you’d simply use

    <?php
    // getid() returns an array, so echo the individual fields
    $row = getid($myvalue);
    echo $row['name'] . ", " . $row['address'] . ", " . $row['zip'];
    ?>

    @mike-jones said in Securing javascript -> PHP mysql calls on Website:

    i am pretty sure the user could just use the path to the php file and just type a web address into the search bar

    This is correct, although with no parameters, no data would be returned. You can actually prevent the PHP script from being called directly using something like

    <?php
    // Refuse to run unless the including page has defined MyConst
    if (!defined('MyConst')) {
        die('Direct access not permitted');
    }
    ?>

    then, on each page that needs to include it

    <?php
    // Defining the constant marks the request as coming via the approved route
    define('MyConst', TRUE);
    ?>

    Obviously, requests that come in directly have not gone via your chosen route, so the script dies because MyConst has never been defined.

    @mike-jones said in Securing javascript -> PHP mysql calls on Website:

    Would it be enough to just check if the number are a number 1-100 and if the drop down is one of the 5 specific words and then just not run the rest of the code if it doesn’t fit one of those perameters?

    In my view, no - not on its own. If those checks run only in JavaScript on the client, they can be bypassed entirely, leaving the PHP file exposed to SQL injection with no server side checking. The same rules need to be enforced in PHP itself - see the sketch below.
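
    As a rough sketch of that server side enforcement (the parameter names and word list here are hypothetical), the whitelist approach might look like this in PHP:

    <?php
    // Hypothetical whitelist - substitute the five words your dropdown actually uses
    $allowed = array('alpha', 'bravo', 'charlie', 'delta', 'echo');

    // Validate the number strictly as an integer between 1 and 100
    $num = filter_input(INPUT_GET, 'num', FILTER_VALIDATE_INT,
        array('options' => array('min_range' => 1, 'max_range' => 100)));

    // Validate the dropdown value against the whitelist (strict comparison)
    $word = isset($_GET['word']) ? $_GET['word'] : '';

    if ($num === false || $num === null || !in_array($word, $allowed, true)) {
        die('Invalid input');
    }

    // Both values are now known-good - safe to hand to a prepared statement
    ?>

    Even with this in place, you’d still use a prepared statement for the query itself - the two defences complement each other.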

    Hope this is of some use to start with. Happy to elaborate if you’d like.

  • @crazycells I guess the worst part for me was the trolling - made so much worse by the fact that the moderators allowed it to continue, insisting that the PeerLyst community was setting an example by being left to “self moderate” - a completely ridiculous statement - and it wasn’t until someone other than myself pointed out that all of this toxic activity could in fact be crawled by Google that they decided to step in and start deleting posts.

    In fact, it reached a boiling point where the CEO herself had to step in and post an article stating their justification for “self moderation” which simply doesn’t work.

    The evidence here speaks for itself.

  • https://www.techerati.com/news-hub/uk-competition-regulator-concerned-about-ai-foundation-models/

  • is my DMARC configured correctly?

    @phenomlab said in is my DMARC configured correctly?:

    you’ll get one from every domain that receives email from yours.

    Today I received another DMARC report from Outlook. I referred back to your reply and found it very helpful and informative. Thanks again.

    I wish Sudonix 100 more great years ahead!