So, what happens if you enter a URL into the browser’s location bar and press ENTER? A great many things, and eventually a website is loaded and rendered.

Let’s check the networking part!


URL: Uniform Resource Locator - an address of a resource.

URL parts


Protocol

The protocol that the client has to use to retrieve the resource. For websites it’s usually https (or http), but email addresses can have mailto, phone numbers have tel, and so on. Some systems let you register your own URL schemes, too.

It can be omitted; in this case the protocol of the current resource is used. (Like referencing external JS files using the same protocol for security reasons: <script src="//"></script>.)

Domain name

… or IP address. It specifies the web server (computer) to contact. More about it later.


Port

There can be multiple web server programs running on the same web server (computer); this number specifies which one to use. By default it’s 443 for https and 80 for http, and if this is the case, it can be omitted.

Path to resource

Representing a physical file (like /posts/web-basics/index.html), or any other resource (e.g. /api/users/dyuri).


Query string

Key/value pairs separated by the & symbol. Both the key and the value should be URL encoded.


Fragment

It’s like a bookmark inside the resource, like #url-parts pointing to this section of this document. It isn’t sent to the server; it’s used only by the browser, or by the application running inside it.
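The split described above can be reproduced with Python’s standard library (the URL here is a made-up example):

```python
from urllib.parse import urlparse, parse_qs

# Split a URL into the parts described above.
url = "https://example.com:8443/posts/web-basics/index.html?lang=en&page=2#url-parts"
parts = urlparse(url)

print(parts.scheme)           # protocol
print(parts.hostname)         # domain name
print(parts.port)             # port
print(parts.path)             # path to resource
print(parse_qs(parts.query))  # query string as key/value pairs
print(parts.fragment)         # the part after '#', never sent to the server
```

Note that `urlparse` keeps the fragment around for the application’s use, exactly because the browser never sends it over the wire.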

The server

The server (computer) is identified by an IP (IPv4 or IPv6) address - but in most cases a domain name is used in the URL - so how is the IP address resolved from the domain name?

The most common answer is DNS (Domain Name System), but it’s a bit more complicated than that. The mechanism is called Name Service Switch - NSS, and it’s a way to specify how resource names are resolved to resource ids in unix-like operating systems.

Note that the network layer of modern Windows has borrowed a lot of functionality from BSD systems ;)

Using NSS, you can define the order of the services used to resolve domain names. For example:

hosts: files dns nis

Which means: for host name resolution the files service is tried first, then dns, and if all else fails, nis. nis is basically not used anymore, and I’ll talk about DNS soon - but the interesting part is files. It means that by entering an IP address/hostname pair into the hosts file (usually /etc/hosts) you can manually assign any IP address to any hostname - which can be really useful for testing purposes: you can point the domain name of any production environment at your own computer ;)
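A quick way to see the resolver (and thus NSS) in action from Python - localhost is normally answered straight from the hosts file, without any DNS query:

```python
import socket

# getaddrinfo() goes through the system resolver, which on unix-like
# systems honours the order configured in /etc/nsswitch.conf -- so an
# entry in /etc/hosts wins before any DNS query is made.
infos = socket.getaddrinfo("localhost", None, family=socket.AF_INET)
addresses = {info[4][0] for info in infos}
print(addresses)  # typically {'127.0.0.1'}, served from the hosts file
```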


DNS

There are some very useful IP addresses that you need to know when you connect to a network - and one of them is (or are) the IP address of the local DNS server. There are also public DNS servers with well known IP addresses, like 8.8.8.8 and 8.8.4.4 (by Google) or 1.1.1.1 (by Cloudflare).

Prefer the DNS server provided by the local network. In some environments it can resolve local hostnames too, and there might also be privacy concerns with using public ones - the DNS server’s owner can basically track which sites you are visiting just by analyzing your DNS requests.

The local DNS server doesn’t know all the domain names ever registered, but it can forward the query to its upstream DNS server, or eventually to the so-called root DNS servers, which help find the DNS provider for the given domain. Once a DNS server has the answer, it can cache it for some time (defined by the owner of the domain name).

Networking - overview

Now that we have the IP address of the server we want to contact, let’s take a step back, and see how computers communicate.

OSI model

In the early days, engineers wanted to standardize the conceptual model of networking. They defined 7 layers:

  • physical
  • data link
  • network
  • transport
  • session
  • presentation
  • application

They were used in some weird cases - I personally used X.25, FTAM and CMIP, but I’m old as hell, and even I was surprised to meet them -, but they never gained much popularity. But if we almost close our eyes and look at the current TCP/IP stack through a narrow gap, we can identify these layers more or less.

Physical layer

The physical entity - copper wire, optical cable, or the electromagnetic space - between the network appliances. Raw bit streams travel in this layer - as photons, electric current or changes in radio waves.

Data link layer

Data frames are transmitted between two nodes connected by the physical layer. The nodes have some kind of (local) addresses - for example the MAC address in case of Ethernet networks. Forwarding in this layer is done by switches.

This layer is responsible for device access (MAC - Medium Access Control), basic error checking and encapsulating network layer protocols (LLC - Logical Link Control).

Network layer

This layer provides the functionality to transfer packets from one network node to another connected to a “different local network”. Routers are used to find a path between these nodes and transmit the packets. (There can be multiple paths, and each packet may take a different route.) Message delivery is not necessarily reliable. Basically host-to-host communication.

Transport layer

The transport layer is responsible for delivering data from one application to another - running on a different computer.

Transport protocols can be connection-oriented - usually providing a reliable connection between the applications -, or connectionless, which doesn’t track whether the communication succeeded or failed.

They also split arbitrarily long data into segments that fit into the packets the underlying layers can transfer in one turn.

For example a typical MTU (Maximum Transmission Unit) in local Ethernet networks is 1500 bytes. The IPv4 and the TCP headers are both minimum 20 bytes so the maximum segment size is 1460 bytes for each packet. Even most emails are longer than that.
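The arithmetic above, spelled out:

```python
import math

# Back-of-the-envelope segment math for TCP over IPv4 on Ethernet.
MTU = 1500         # typical Ethernet MTU, in bytes
IPV4_HEADER = 20   # minimum IPv4 header
TCP_HEADER = 20    # minimum TCP header (no options)

mss = MTU - IPV4_HEADER - TCP_HEADER
print(mss)  # 1460 -- the maximum segment size

# So even a modest 10 KiB email body has to be split up:
segments = math.ceil(10 * 1024 / mss)
print(segments)  # 8
```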

Session layer

Handles user sessions, typically implemented explicitly in applications, like FTP, SMB, …

Presentation layer

This layer transforms the data coming from the application to something that can be easily transferred over the network. Encryption (like SSL) or compression (like gzip) can be done in this layer too.

Application layer

Most protocols that are directly used by applications are here (but they usually provide the functionality of the Session and the Presentation layers too). For example HTTP, SMTP, FTP.

The modern internet (TCP/IP)

As I mentioned above, the OSI model is not really used on our modern internet; many modern protocols implement features of multiple layers, but the lower layers are more or less similar.


IP

The IP - Internet Protocol - is basically a network layer protocol, delivering packets from the source host to the destination host based on the IP addresses in the packet header, via routing.

Currently there are two versions in use, IPv4 and IPv6. There never was an IPv1, 2 or 3, and even though higher version numbers have been used, most of those projects are obsolete or abandoned.


UDP

UDP - User Datagram Protocol - is a connectionless transport layer protocol. It does not provide reliable transmission (though that can be implemented at higher levels), but it’s quick and stateless, suitable for simple query-response tasks (DNS, NTP), modelling lower layers over IP (tunnels, or NFS) or - due to the lack of retransmission delays - basically the only option for real time communication (VoIP, online games, RTSP).
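A minimal sketch of that connectionless style, using two UDP sockets on the loopback interface (the port choice and the messages are made up for the demo):

```python
import socket

# A UDP "conversation" on the loopback interface: no connection setup,
# no delivery guarantee -- each sendto() is an independent datagram.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free one
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"what time is it?", addr)

data, client_addr = server.recvfrom(1024)
server.sendto(b"12:00", client_addr)   # answer the query

reply, _ = client.recvfrom(1024)
print(reply)  # b'12:00'

client.close()
server.close()
```

This query-response shape is essentially what DNS and NTP do, just with binary message formats instead of text.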


TCP

TCP - Transmission Control Protocol - is the connection-oriented, reliable transport layer protocol of the IP family. It is used where the reliability of the connection is essential - like for file transfers, emails, remote administration (SSH), or the web (well, till HTTP/3 at least).

Both UDP and TCP use sockets - a combination of IP address + port - to establish app-to-app communication, basically multiplexing the data streams onto the network.
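A sketch of such a socket pair - a tiny TCP echo exchange on the loopback interface (everything here, from the port to the payload, is just demo data):

```python
import socket
import threading

# Minimal TCP exchange on the loopback interface: the (IP, port)
# pair on each side is the "socket" that identifies the application.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free one
server.listen(1)
host, port = server.getsockname()

def serve_once():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))  # echo back whatever arrives
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

client = socket.create_connection((host, port))  # the TCP handshake happens here
client.sendall(b"hello")
echoed = client.recv(1024)
print(echoed)  # b'hello'

client.close()
t.join()
server.close()
```

Unlike the UDP case, the connection is established first (the three-way handshake inside `create_connection`/`accept`), and the kernel retransmits lost segments for us.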


Routing

There can be more than one way to get from one host to another, and routers - using routing algorithms - are responsible for finding one. They can be configured in a static way (your own computer, or your SOHO router probably works this way) or use some kind of algorithm to find the shortest/quickest/cheapest/… way to the other host.

The routing table of my own computer (or something like that):

$ ip route show
default via … dev enp5s0 proto static
… dev wg1 scope link
… dev enp5s0 proto kernel scope link src …
… dev wg0 proto kernel scope link src …
… dev wg1 scope link
… dev wg1 scope link
Local networks

There are (were) pretty few IPv4 addresses, so some of them aren’t used by public computers (and aren’t handled by routers in the same way as public ones). This way you can use these addresses for your home/office appliances without risking accidentally replacing Google search with your washing machine ;)

These networks are:

  • 10.0.0.0/8 (10.0.0.0 – 10.255.255.255)
  • 172.16.0.0/12 (172.16.0.0 – 172.31.255.255)
  • 192.168.0.0/16 (192.168.0.0 – 192.168.255.255)
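Python’s ipaddress module knows these private (RFC 1918) ranges, which makes for an easy check:

```python
import ipaddress

# is_private is True exactly for addresses reserved for local networks.
for ip in ("10.1.2.3", "172.16.0.1", "192.168.1.1", "8.8.8.8"):
    print(ip, ipaddress.ip_address(ip).is_private)
# 10.1.2.3 True
# 172.16.0.1 True
# 192.168.1.1 True
# 8.8.8.8 False
```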

HTTP

The Hypertext Transfer Protocol is an application layer protocol in the IP protocol family, built for the World Wide Web. Its first version was developed by Tim Berners-Lee at CERN in 1989.

It’s a request-response protocol (typical client-server model). The client (for example the browser) sends a request to the server which sends back a response.

The HTTP request

POST /ide/megy/az/adat HTTP/1.1
Host: example.com
Accept-Encoding: gzip, deflate, compress
Content-Type: application/x-www-form-urlencoded
Content-Length: 26

  • method - GET, POST, PUT, DELETE, …
  • path - the path of the requested resource
  • protocol version - well, most probably HTTP/1.1 if you can see it this way :)
  • headers - Host is mandatory for HTTP/1.1, everything else is optional
  • body - after an empty line, optional
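For illustration, such a request can be assembled by hand; the Host value and the form body below are made up, the body chosen so that it is exactly 26 bytes long to match the Content-Length above:

```python
# Building a raw HTTP/1.1 request byte by byte: request line,
# headers, an empty line, then the (optional) body.
method, path = "POST", "/ide/megy/az/adat"
body = b"username=dyuri&lang=hu&x=1"   # 26 bytes, made up for the demo
headers = {
    "Host": "example.com",
    "Content-Type": "application/x-www-form-urlencoded",
    "Content-Length": str(len(body)),
}
request = (
    f"{method} {path} HTTP/1.1\r\n"
    + "".join(f"{name}: {value}\r\n" for name, value in headers.items())
    + "\r\n"                           # empty line ends the headers
).encode() + body
print(request.decode())
```

Bytes like these are all there is to it - which is why you can type a working HTTP/1.1 request into telnet by hand.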

The HTTP response

HTTP/1.1 200 OK
Date: Wed, 13 May 2015 11:12:13 GMT
Connection: keep-alive
Content-Encoding: gzip
Content-Type: text/html
Server: nginx/1.7.1

  • protocol version
  • status code
    • 1xx - informational
    • 2xx - success
    • 3xx - redirection
    • 4xx - client error
    • 5xx - server error
  • headers
  • body - after an empty line, optional
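Since the format is so simple, a response like the one above can also be parsed by hand (the body and some header values here are made up):

```python
# A raw HTTP response: status line, headers, empty line, body.
raw = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html\r\n"
    b"Server: nginx/1.7.1\r\n"
    b"\r\n"
    b"<html>hello</html>"
)

# The empty line separates the head from the body.
head, _, body = raw.partition(b"\r\n\r\n")
status_line, *header_lines = head.decode().split("\r\n")
version, status, reason = status_line.split(" ", 2)
headers = dict(line.split(": ", 1) for line in header_lines)

print(version, status, reason)  # HTTP/1.1 200 OK
print(headers["Server"])        # nginx/1.7.1
print(body)                     # b'<html>hello</html>'
```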


HTTPS

HTTPS isn’t another protocol; it’s HTTP using SSL/TLS to secure the connection and encrypt the data flow. Certificates might cost money, but there are free options as well, like Let’s Encrypt.
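From the client side, this mostly means wrapping the TCP socket in a TLS layer before talking HTTP; a sketch with Python’s ssl module (only the setup, no network traffic, and example.com is a placeholder):

```python
import ssl

# The default context enforces certificate validation, which is what
# makes https trustworthy and not just encrypted.
context = ssl.create_default_context()
print(context.check_hostname)                    # True: cert must match the host
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: cert must be valid

# A real request would then wrap a plain TCP socket:
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```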


HTTP/2

A more efficient expression of the HTTP semantics:

  • metadata (headers) are compressed and re-used, so they require much less space
  • uses a single TCP/IP connection per server (but with multiple virtual channels)
  • there’s an option for server push (but hard to use properly, so basically nobody uses it)
  • browsers require TLS (https) to support it (although it’s not mandatory)


HTTP/3

Very similar to HTTP/2, but it uses QUIC instead of TCP - a reliable transport layer protocol over UDP, developed by Google. The TLS connection establishment is baked into the handshake process, so it can be much faster than HTTPS over TCP, and it has many other advantages.

Personal note about newer HTTP versions

For small/medium websites, HTTP/1.1 is pretty much enough. Implementing the protocol itself is pretty easy, you can issue simple HTTP calls even by hand (via telnet for example), and a basic HTTP/1.1 server that, say, reports the temperature can be implemented in a few lines of C code running on a microcontroller.
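A sketch of such a toy temperature server - written in Python here rather than C, with a made-up temperature reading - to show how little protocol handling is actually needed:

```python
import socket
import threading

# A toy HTTP/1.1 server that answers one request with a temperature.
def serve_one(server):
    conn, _ = server.accept()
    conn.recv(1024)           # read (and here, ignore) the request
    body = b"21.5"            # pretend we just read a sensor
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/plain\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"Connection: close\r\n"
        b"\r\n" + body
    )
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free one
server.listen(1)
host, port = server.getsockname()
threading.Thread(target=serve_one, args=(server,)).start()

# Act as the client: send a hand-written request, read until EOF.
client = socket.create_connection((host, port))
client.sendall(b"GET /temperature HTTP/1.1\r\nHost: localhost\r\n\r\n")
response = b""
while chunk := client.recv(1024):
    response += chunk
client.close()
server.close()

print(response.decode().split("\r\n\r\n")[1])  # 21.5
```

Everything an HTTP/2 or HTTP/3 server additionally needs (TLS, header compression, stream multiplexing) is absent here - which is exactly the point.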

HTTP/2 and 3 on the other hand are huge beasts, with (basically mandatory, and computationally heavy) TLS, header compression, connection multiplexing and so on. You most probably don’t want to implement even a minor part of them for your hobby project - and as I said, you pretty much won’t need them for any low traffic site. Big websites (like Google or Facebook) are the ones who desperately need them.

So if you are in an environment - using standard web server software, like nginx - where HTTP/2 or 3 is available, use it. But if you can only use HTTP/1.1 for embedded systems, it will still work. And if you want to connect it to the public internet, there’s always the option to put it behind a proxy server that supports HTTP/2+ and strong TLS.