Julian's Blog

The Internet from Rocks — A High Level Explanation of Computers and the Internet

Note: This is quite a lengthy article so feel free to skim it / read it in multiple sittings!

Introduction

Imagine bringing someone from the bustling city of London in 1754 all the way to New York in 2022. They would be absolutely amazed by the world around them. The streets would be filled with towering skyscrapers, cars would be honking and whizzing by, and people would be rushing around chattering on their phones. As you bring them around the city, they would be filled with questions. How do they build buildings that tall? What are these mysterious metal contraptions zipping around? And what are those strange devices everyone is looking at?

Londoner Seeing New York

As you begin to talk about buildings built with steel frames or carts fitted with gasoline engines, he’s completely taken aback. How on earth has the world developed this much in such a short time? Walking around the city, you eventually come across a small coffee shop. Stepping inside, the Londoner sees a man sitting down with a coffee, poking at a metal device. Turning to you, he asks what it is. “It’s a laptop”, you reply. “A laptop?” the Londoner asks incredulously. “What do you mean a laptop?”

Londoner Seeing Laptop

You begin to explain that a laptop is a machine that can do many amazing things. You explain how it can store information, play games, and browse the web. Still confused, the Londoner asks how it works. And that’s when you realize you have no idea. You’re not an engineer, but when asked about how the buildings were built you were able to give a rough answer. You’re not a mechanic, but when asked about how cars move, you could talk about the engine or the wheels or the chassis. Yet when he asks about the computer you don't even know where to begin. You know it runs on electricity and have heard some vague notions about coding, but aside from that you’re clueless. You don't know how it was invented, who invented it, or how it works. All you know is that you’ve spent most of your life using one and you've never needed to know any more. The computer is as mysterious to you as it is to the Londoner.

The Context

I’ve found that there seems to be a huge gap in people’s knowledge about how exactly computers work. Personally speaking, I didn't even begin to have an understanding of how they worked until my third year of a Computer Science degree, and I still find I have many gaps in my knowledge. This is because in order to understand how a computer works, you have to understand many different, seemingly unrelated concepts, many of which are extremely difficult to understand on their own. You then have to take all those difficult concepts and fit them together somehow. It’s super confusing. I mean, just look at what a map of computer science looks like:

Map of computer science

Credit: Dominic Walliman

If I, a student studying computer science, still sometimes have a hard time connecting the dots, I would expect that the average person wouldn't even know where to begin. So with that in mind, I've decided to try and create a long-form blog post that would answer that fateful coffee shop question. This post is meant to take someone from having the vaguest ideas about how computers work to having a general understanding of all the important concepts and how they relate. This post should be read from start to end as the concepts build on one another. After reading this, you should come away with a high level understanding of all the different components of a computer, and how they fit together.

Computers are magic!

Any writing about computers can quickly go out of date. In this post, I will not dive too deeply into any specific implementations of technologies but rather focus on what each technology does at a high level. To go back to the analogy of a car, there have been many advancements in car technology in the past 50-60 years, but fundamentally cars still work the same way. Computers are no different. In fact, technically, anything that a modern computer can compute, a computer from the 50s or 60s can also compute! See Turing machine.

Cars and computers are the same

Before I dive into any specifics, it's important to get a general overview of what this post will explain, which doubles as an overview of how a computer works. This matters because oftentimes we can figure out what an individual piece of the puzzle is, but struggle to see how it fits into the bigger picture.

An Overview

What is a computer? Depending on who you ask, you can get many different answers. But fundamentally, a computer is a machine that takes in some input, performs some work on that input, and then produces some output. A computer is a machine that can be programmed to carry out arithmetic or logical operations (more on what these are later) and store and move data. Just the ability to do these simple operations allows computers to do a wide variety of tasks, from simple math to predicting the weather.

The earliest computers were not even electronic. Many consider devices such as the abacus or the slide rule to be early computers. The first complex computing machine, however, was designed by Charles Babbage in the 1830s: a mechanical computer that was general-purpose, meaning it could do many different tasks. From there, as computers developed, they became electronic and much faster.

Historic computers

There are two main concepts we have to understand when looking at computers: hardware and software. Hardware is anything physical and encompasses what we can see and touch. To understand hardware is to fundamentally understand what is physically inside a computer and how it works. Understanding hardware tells us how data and information are physically represented, stored and manipulated in a computer. Software is the set of instructions for that hardware. Unlike hardware, software is not physical: you could write a piece of software down on paper or memorize it and store it in your brain. Software is a series of instructions that we give to a computer in order to make it do cool things. As you'll see shortly, hardware and software are intimately intertwined, and they each drive the other.

Hardware vs software

Anyway, without further ado, these are the concepts and steps that this blog post will talk about:

  1. We will start with boolean logic, a special type of mathematics that computers use to calculate things and store data.
  2. We will then see how this type of math can be automatically calculated using physical components like electronic circuits.
  3. After that, we will see how a large number of these electronic circuits can be arranged in a certain way that allows us to create a basic computer.
  4. We will then take this basic computer and explain how programmers can give it instructions and make it do cool things.
  5. After this, we will talk about how the computers we buy come with a program already written on them, called an operating system, which runs automatically when we turn on the computer.
  6. From there we will zoom in on one particularly important program that the operating system can run: your internet browser.
  7. We will then give an overview of how your computer talks to other computers and explain what exactly happens when you type a website’s name into your browser's URL bar.

At the end of this, should you meet any rogue time travelers asking about laptops in coffee shops, you should be able to give them a good answer or, much to their confusion, simply refer them to this article. See recursion.

The steps the article will follow

The Language Computers Speak

In order for computers to perform simple calculations and store data, they need their own special alphabet. This alphabet will need to be able to represent any type of data and will need a series of operations that we can use to perform calculations on that data. Because we like to keep things simple and because computers are pretty dumb, we use a very basic alphabet called the binary alphabet. The binary alphabet only has two symbols in it, 0 and 1, which correspond to true and false. Yet it seems a little bit odd to call this an alphabet. Isn’t there only one alphabet with the letters a - z?

To help understand this, let’s consider another type of alphabet that we are all familiar with: the decimal alphabet of numbers. In the decimal alphabet there are 10 symbols that we are all very familiar with: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. In the same way that we can represent the data of mathematics with these 10 symbols, we can also represent the data of computers with just the two symbols 0 and 1. It may seem confusing how exactly we can represent any type of data with two numbers, but in reality the simplicity of having just two symbols actually allows us to represent anything. Take, for example, all the numbers from 0 to 15. With just four 0s or 1s we can represent every single one of them, as shown below. We do this by assigning a unique combination of 0s and 1s to each number.

numbers to binary numbers
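If you want to see this for yourself, here's a tiny sketch in Python (my choice of language here, purely for illustration) that prints each number from 0 to 15 next to its four-digit binary combination:

    # Each number from 0 to 15, written with four binary digits.
    for n in range(16):
        print(n, "->", format(n, "04b"))   # e.g. 5 -> 0101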

From there it’s quite simple to see how we can store English letters and words. Since we can store any number using just 0s and 1s, we can easily store any letter: simply assign each letter to a number. In fact, this is exactly what the American Standard Code for Information Interchange (ASCII) does! In the 1960s computer scientists decided to standardize which numbers stand for which letters, and these assignments are still used today.

letters to binary numbers
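Again, if you'd like to poke at this yourself, here's a small Python sketch using the ASCII numbers built into the language:

    # Every character has an agreed-upon ASCII number behind it.
    for letter in "Hi!":
        number = ord(letter)                               # letter -> number
        print(letter, "->", number, "->", format(number, "08b"))

    # The mapping works in reverse too: numbers back into letters.
    print(chr(72), chr(105))                               # H i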

From here it’s a small leap to see how we can describe any arbitrary type of data (images, audio, video, etc.) with the binary alphabet. We just need a standardized format to specify how the 0s and 1s describe each type of data. This is why we have all the different extensions and file formats like .jpeg, .mp3 or .gif. Each one is simply a standardized way of using the binary language to describe that type of data. Say, for example, we wanted to describe a simple image like the one below. The image has 16 squares, and each square can be black or white. We can simply encode that image as sixteen 0s or 1s, each corresponding to whether that square is white (0) or black (1). Extrapolate that to a grid containing thousands of squares and you have a black and white pixel art image.

example of binary number to pixel art
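Here's a rough Python sketch of that exact idea; the sixteen bits below are just a made-up pattern:

    # Sixteen bits describing a 4x4 image: 1 = black square, 0 = white.
    bits = "0110100110010110"   # a made-up pattern

    for row in range(4):
        row_bits = bits[row * 4:(row + 1) * 4]
        # Print a '#' for each black square and a '.' for each white one.
        print("".join("#" if b == "1" else "." for b in row_bits))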

Note: A natural question that arises from this is how the computer knows how to interpret all the 0s and 1s if that’s all we have. How do we know if these 0s and 1s are numbers or text? Well, it’s quite complicated, but in essence, we can use another combination of 0s and 1s as a label that we put on a group of 0s and 1s. Before we look at any data we look at the label, so we know what type of data it is and how we should interpret it. For example, we may attach the label 01 if it’s some text or 1101 if it’s an image.
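Here's a toy Python version of that labeling idea. To be clear, the labels 01 and 1101 are invented for this example; real formats are far more elaborate:

    # A label in front tells us how to read the 0s and 1s that follow.
    def interpret(label, payload):
        if label == "01":                      # our made-up label for text
            return "text: " + chr(int(payload, 2))
        if label == "1101":                    # our made-up label for a pixel
            return "pixel: " + ("black" if payload == "1" else "white")
        return "unknown data"

    print(interpret("01", "01001000"))         # text: H
    print(interpret("1101", "1"))              # pixel: black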

Now that we know that we can store any type of data using the binary alphabet, we need to figure out some way to perform mathematical operations on that alphabet so we can actually do stuff with the data. Computers aren’t cool simply because they can store data; they're cool because they can do crazy calculations with that data, like adding huge numbers together or rotating and editing photos.

This is where the binary logical operators come in. There are only a few basic operations, yet using them in combination allows us to do some truly amazing things. These operators are kind of like the operators in normal math, plus (+), minus (-), divide (÷), and multiply (×). In normal math putting these operators in certain places gives us a new value. For example, putting a “+” between a 5 and a 5 gives us a 10.

Similarly, putting our boolean operators in certain places will also give us new values. The boolean operators are called AND, OR, and NOT. The AND and OR operators take in two values, the same way a plus (+) sign takes in two values, and give us a new value.

Normal math vs binary math

The NOT operator is a little different. The NOT operator takes in only one value. This is similar to how the negative (-) operator in math takes in one value. The negative sign takes in one number and gives us the negative value of that number. If we apply the negative operator to 1 we get -1. Similarly, the NOT operator takes in one boolean value, 0 or 1, and gives us back the opposite of what we put in.
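To make the three operators concrete, here's a little Python sketch that spells out what each one outputs for every possible input:

    # The three boolean operators as tiny functions.
    def AND(a, b): return 1 if a == 1 and b == 1 else 0   # 1 only if both are 1
    def OR(a, b):  return 1 if a == 1 or b == 1 else 0    # 1 if either is 1
    def NOT(a):    return 0 if a == 1 else 1              # the opposite

    for a in (0, 1):
        print("NOT", a, "=", NOT(a))
    for a in (0, 1):
        for b in (0, 1):
            print(a, "AND", b, "=", AND(a, b), "   ", a, "OR", b, "=", OR(a, b))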

By using different combinations of these operators it is possible to do all kinds of things, like add and subtract binary numbers (remember the above representation of zero to fifteen) or even answer questions like whether one number is bigger than another. Below is an example of adding two binary numbers using these operators in a simple circuit. Don’t worry too much if you don’t understand how it actually works; it’s enough just to know that it can be done.

not operator and binary adder
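And just to show that it really can be done, here is a hedged Python sketch of the same trick: an adder built out of nothing but AND, OR and NOT, adding 5 and 3 in binary:

    # One-bit building block: XOR ("either, but not both") made from
    # the three basic operators.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return 1 - a
    def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

    # A "full adder" adds two bits plus the carry from the previous
    # column, exactly like adding digits by hand.
    def full_adder(a, b, carry_in):
        total = XOR(XOR(a, b), carry_in)
        carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
        return total, carry_out

    # Chain four of them to add 0101 (5) and 0011 (3).
    a_bits = [1, 0, 1, 0]   # 5, least significant bit first
    b_bits = [1, 1, 0, 0]   # 3, least significant bit first
    carry, result = 0, []
    for a, b in zip(a_bits, b_bits):
        bit, carry = full_adder(a, b, carry)
        result.append(bit)
    result.append(carry)
    print("".join(str(bit) for bit in reversed(result)))   # 01000, which is 8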

Circuits and Wires

This kind of boolean math that we described above can be calculated with and represented by physical objects. This concept may seem strange at first: how are we supposed to compute this weird kind of math with physical objects? To understand this, why don’t we try to use physical objects to calculate normal math first. Say we want to calculate 5+5 with a physical apparatus. We can use two cups of rocks, where each cup has 5 rocks in it. We can then pour the rocks from both cups into a larger cup and count how many there are in the larger cup. If we did everything right, there should be 10 rocks in the larger cup, giving us our answer.

Calculating math with rocks

Similarly, we can calculate boolean math with physical objects. Take the OR operation for example. This is an operation that will output a 1 (true) if either of its two inputs is 1 (true). There are many different ways to physically calculate an OR operation. One simple way that is very easy to understand visually is with water. We will represent a 0 (false) with no water flowing and a 1 (true) with water flowing. We can easily construct an OR gate as shown below. In fact, it’s possible to create any kind of gate (something that calculates a boolean operation) with these pipes of water, as shown in the following tweet. With an understanding of electricity and circuitry, it’s possible to create gates that calculate the same things with metal and electricity.

water logic gates

Electronic gates, however, are a bit harder to understand visually. But using electricity turns out to have many advantages over using water or other mediums. Electricity can flow much faster, and the gates can be made much, much smaller. And this is all that computers physically are: an arrangement of millions and millions of these tiny electronic gates that each perform one simple boolean calculation. Kind of like with our water circuits, the flow of electricity represents a 1 (true) and the lack of a flow of electricity represents a 0 (false). Below is a photo of what those gates look like when translated into electronic circuits.

what gates look like

Image credit: Z. Luo et al., Nature 579, 214 (2020)

Yet any random placement of these gates will not give us a computer. We need to place thousands and thousands of these gates in a very specially designed pattern for them to be useful to us. It seems absolutely mind-boggling that we can get from those simple gates to a computer capable of calculating crazy things, and it is. But if you think about it, there are many other examples of crazy behavior emerging from millions of simple parts.

Take the human body, for example: we know that physically we’re only made up of very simple molecules, yet putting millions of these simple molecules in just the right configuration can create something amazing. To understand how these basic parts turn into computers, we have to first see how we can use these gates in combinations that give us simple circuits that do a little bit more for us, kind of like how cells do a little bit more for our bodies than the basic molecules do. These small circuits can do things like add numbers, store or load data, or count. Once we have these small circuits, we use them in other special combinations that will allow us to build the individual components of a computer.

human vs computer emergent behavior

There are many different types of special small circuits that we can build. I will list and briefly talk about a few of them just so you can get a rough idea of what they are and how they work. This isn’t super necessary to your understanding, but I think it’s a good stepping stone into our next section, the CPU. Below I’ve also included some images and diagrams of how they look and what gates we need to use to build them. Some important circuits include adders, multiplexers, decoders and latches.

  1. Adders do what they sound like they do: add two numbers together.
  2. Multiplexers allow us to choose what data we want to get. Think of it like a telephone operator. You want to call someone on the phone, so you dial a number, and then the telephone operator chooses what line to connect you to.
  3. Decoders allow us to selectively turn things on or off. You can think of a decoder as someone who works in a huge room with a thousand light switches. They wait for their boss to tell them a number and then they go and turn on that one specific light switch.
  4. A latch is a circuit that stores information. It takes in either a 1 or 0 and then stores it until we want to change it. Think of it as a coin in a safe. The coin is either heads or tails and will remain that way until we open up the safe and want to check what it is or change it.

diagrams of circuits
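If you're curious, here are rough Python sketches of a multiplexer, a decoder and a latch, each built from nothing but the basic gates. These are simplified toy versions, not how real chips are laid out:

    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return 1 - a

    # Multiplexer: the "telephone operator". The select bit chooses
    # which of the two input lines gets through.
    def mux(select, line0, line1):
        return OR(AND(NOT(select), line0), AND(select, line1))

    # Decoder: the "light switch worker". Given a number (one bit here),
    # turn on exactly one of the two switches.
    def decoder(number):
        return NOT(number), number

    # Latch: the "coin in a safe". Keeps its stored bit until we set
    # or reset it.
    def latch(stored, set_bit, reset_bit):
        return OR(set_bit, AND(stored, NOT(reset_bit)))

    print(mux(0, 1, 0))      # select=0 picks line0 -> 1
    print(decoder(1))        # only switch 1 turns on -> (0, 1)
    bit = latch(0, 1, 0)     # set the bit    -> 1
    bit = latch(bit, 0, 0)   # leave it alone -> still 1
    print(bit)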

The Computer

With these simple circuits, it’s possible to create the components that make up a computer. Modern computers are modeled after the von Neumann architecture, which was described in 1945 by John von Neumann. The von Neumann architecture is a clever way of splitting up the different parts of a computer. In this architecture we split the computer into 4 main parts, some of which you may recognize the names of. We have the CPU, the memory (RAM), external memory (storage) and input/output devices (mouse, keyboard, display, etc.). The most important parts for us are the CPU and the RAM. You can think of the CPU as the brain of the computer and the RAM as the short-term memory of the computer. Normally, the CPU controls everything and does all the calculations while the RAM serves as a quick storage place for the CPU. Kind of like a mathematician and his notebook.

CPU vs RAM

We can create both the CPU and RAM out of those simple circuits that we previously looked at (adder, multiplexer, etc.). The way that they’re configured is extremely complex, though, and you don’t really have to know how it’s set up in order to get an understanding of how it works. If you’re interested, I’ve included a diagram below.

CPU Diagram

Credit: http://simplecpudesign.com/simple_cpu_v1/index.html

Now that we have all this information out of the way we can finally start understanding how a CPU (the brains of the computer) works and how this relates to programming! Stay strong, we’re getting to all the good parts that will really solidify how we get from this hardware and logic to the modern computers we all use and love.

A key idea that we need in order to understand how CPUs work is the idea of clock speed. You might have actually seen it before when purchasing a computer or looking at a new Mac. That little 4.6 GHz thing is the clock speed. You probably assume that higher speed = better performance, but what does that idea of “clock speed” actually mean in terms of our CPU? Well, it turns out that it’s extremely important.

Remember how we learned that the CPU is made up of all those little circuits? And those in turn are made up of all those little gates? Every configuration of those circuits and gates will only do one calculation, so we need to constantly change the configuration in a very specific way in order to calculate more complex things that need multiple calculations (think adding several numbers together). Yet at the same time, all those changes need to be synchronized across the whole computer, otherwise the operations and data will get out of sync.

This is where the idea of a clock comes in. Every single one of those little gates is synced to a central clock in the computer that tells each circuit when to change configuration. By doing this we make sure all configurations “change” at the same time. The faster we make that clock run, the faster the computer will run (theoretically). That little 4.6 GHz number means that the clock for that computer can change configurations up to 4,600,000,000 (4.6 BILLION) times per second. This is absolutely astounding. It’s the sheer speed at which we can change those configurations, allowing billions of simple calculations per second, that gives computers their superpowers.

xps cpu screenshot
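Here's a toy Python sketch of that synchronization idea: every "register" computes its next value from the current configuration, and then everything changes at the same moment, once per tick. The two registers here are invented purely for the example:

    # Two toy "registers". On every tick, next values are computed from
    # the current configuration, then everything changes at once.
    counter, copy_of_counter = 0, 0

    def tick(a, b):
        next_a = a + 1   # this register counts up
        next_b = a       # this one copies the other's *old* value
        return next_a, next_b

    for cycle in range(3):
        counter, copy_of_counter = tick(counter, copy_of_counter)
        print("tick", cycle + 1, ":", counter, copy_of_counter)

    # A 4.6 GHz CPU performs this kind of tick about 4,600,000,000
    # times every second.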

This idea of clock speed gives us a big clue to understanding how a CPU works. From it, we know that a CPU works in a sequential, synchronized way. So what does a CPU actually do? Well, a CPU has a set list of operations that it can do. This list was decided by the CPU manufacturer. For example, it can add two numbers, compare whether one number is bigger or smaller than another, and read and write things to the RAM (the little memory notebook). All a CPU does is load in a list of instructions and then execute each instruction, one by one. These instructions can tell it to do many different types of things, including loading more instructions or storing data in RAM! The CPU also has one other really cool superpower that lets it do much more complex things: the ability to make decisions about what it should do next. This isn’t some Terminator-type sentience, but rather an extremely simple form of asking questions and then changing behavior based on the answer.

What a cpu does

I’ve mentioned the term RAM several times and talked about how it’s a place where we can store and load (binary) data. Before I go any further I just wanted to quickly address what I mean by this. As a program runs, we often need to keep track of many different things at once and keep them close at hand. This can include things like numbers or words or images we’re currently working on. Say, for example, we’re using a program to write a document; that program will need a place to store the different letters of the text we’re working on. That place is RAM.

This may sound a bit confusing, as previously when I described the von Neumann architecture I talked about RAM as well as something called external memory / storage. These are actually separate places where we can store data. RAM tends to be a lot smaller than your external storage, but it’s also a lot faster to read and write from. When we’re running programs that are working on data that we will immediately need, we don’t want to waste time storing it in the slow external memory. Instead we keep it in RAM.

You can think of it as doing work in a library. We keep all the books and papers that we will immediately need on the table that we’re sitting at. We also keep a little notebook where we can do calculations or write things. Our desk is our RAM. It’s small but contains everything we currently need. When we find we need some other book or paper we don’t immediately have, we have to get up and walk around the library to look for that book. It’s a lot slower to get information from these books, but the entire library can hold many thousands of books. This is our external memory.

ram as a library

With an understanding of the CPU executing instructions and RAM holding our instructions and data, we can begin to understand what the word “programming” actually means. We program a CPU by giving it a program to follow. This program is nothing more than a sequential list of the operations that we want it to execute. The operations that we give to CPUs are very very specific. We have to tell it exactly which line and page in the memory we want to write things to, where we need things loaded, what numbers to add and where to store things. Below is a table of common instructions you can give a CPU (no need to understand it though).

Common mips instructions

Programming and Applications

This is really all a CPU does. It takes in instructions and executes them in order. Computers can do billions of these instructions per second, and it’s this speed that gives them their seemingly magical powers. The CPU also has access to the other parts of the computer, like the RAM, the monitor and the keyboard, and it can send data to any of those other places. This is enough for the CPU to do all the cool things we know and love (like read interesting blog posts).

Below is an extremely simple example program that we can give to the CPU to calculate some simple math. It loads in the number 25 and writes it down in a place called $t2 (li stands for load immediate). It then loads in the number 12 and writes that down in a place called $t3. It then adds those two numbers together and writes the result down in $t4. If we went over to $t4 and read it, it would have the binary number 100101 written in it, which corresponds to the number 37.

mips
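To connect this back to everything above, here's a hedged Python sketch of a miniature CPU running that exact program. The instruction names and $t register names follow the example above, but the mini-language itself is made up:

    # The three-instruction program from above, written as data.
    program = [
        ("li", "$t2", 25),             # load immediate: put 25 in $t2
        ("li", "$t3", 12),             # put 12 in $t3
        ("add", "$t4", "$t2", "$t3"),  # $t4 = $t2 + $t3
    ]

    registers = {}

    # The CPU's whole job: take each instruction, one by one, and do it.
    for instruction in program:
        operation = instruction[0]
        if operation == "li":
            _, destination, value = instruction
            registers[destination] = value
        elif operation == "add":
            _, destination, left, right = instruction
            registers[destination] = registers[left] + registers[right]

    print(registers["$t4"])                  # 37
    print(format(registers["$t4"], "b"))     # 100101, as promised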

For a long time, manually giving CPUs instructions one by one was all we could do. Old computers from the 50s had to have physical punch cards fed into them, where each card corresponded to one of these instructions. On each of the punch cards (pictured below), you would specify which instruction you were telling the computer to do, and which data you wanted that instruction to run on.

punch card programming

When you fed the cards in, a physical device would check which holes you had punched, and then relay that information to the CPU, which would interpret that information as an instruction. Then the device would move on to the next card and interpret the next instruction. When you had a program in those days, you did not want to drop it. It could take several hours to put the cards back into the correct order. Nonetheless, while somewhat limited by the technology, computer programmers of those days were able to achieve some truly incredible things. Calculations that were needed to create the hydrogen bomb were done with punch card programming. Quick note: back in those days, there was no sense of running applications as we know them now, or of doing multiple things at once. On one computer, one program would be fed in at a time, and the user would have to read the result of that program before feeding in another.

punch card scanning

This type of programming is extremely cumbersome and hard to do. You have to be intimately familiar with the CPU and its specific instructions, and you have a limited number of places you can store stuff, meaning you have to be very precise about where your data is written. You also have to spell out to the CPU exactly what you need it to do with extreme care. Programmers of this time were craftsmen and created programs with extreme precision and control over their machines.

This type of programming is still occasionally done today in extremely important pieces of software — maybe something that NASA would use to land a rocket (note: they wouldn’t use physical paper punch cards anymore but instead type up the instructions). But for the average programmer, it’s very difficult and tedious to program this way. As time went on, programmers became more and more fed up with the effort needed to do even the simplest task. They began to make small programs that would make their own programming lives easier. They began to use shorthand ways to refer to a series of instructions, and gave instructions more intuitively understandable names. Eventually these kinds of optimizations culminated in the creation of the first programming languages.

improving programming

A programming language is not some series of magical enchantments. Think about the normal languages that we use. To English speakers, Chinese may seem very alien, but fundamentally Chinese is just an alternate way of representing the data in our minds. Similarly, a programming language is also nothing more than an alternate way of representing our thoughts to the computer. A programming language allows us to speak to the CPU in a way that’s more convenient and easy for us. When a programmer writes a program in a programming language, that program is first run through another special program called a translator (a compiler) that translates it into the CPU’s language. This turns our high level thoughts into specific instructions from the list of instructions that the CPU can execute. The first widespread programming languages emerged in the 1950s. In 1954 IBM created FORTRAN, a programming language that is still in use today. The development of FORTRAN allowed many different people to start programming, and allowed programmers to create much more code much faster. Below is a simple little Fortran game where the user has to guess a randomly generated number. I’ve annotated the code if you want to try to understand it.

fortran program

Credit: https://opensource.com/article/21/1/fortran

With the creation of FORTRAN and other programming languages, programs became larger and larger, and took more and more time to run. Additionally, as people began to realize the power of computers, they became extremely popular. Because of all of this, we needed a better way of interfacing with the computer than having one giant computer that we program with punch cards. Two main developments came out of this: the creation of terminals and keyboards, and the creation of operating systems.

Before we dive into either of those I wanted to give a quick disclaimer. In the upcoming sections, I’m going to talk a lot about different programs. Admittedly, there is a bit of hand waving that we have to do here. I’ll often say a program was created that did this or a program was created to do that, and not really explain how it works. This is because as computers became more advanced these programs became extremely complex and large. Therefore before I do this hand waving I wanted to talk a little bit about what exactly a program is.

Programs are a bunch of lines of code. These lines of code are written by a programmer in a programming language. Normally, different functionalities of the program are broken up into different individual parts. For example, in a messaging program we may have some code that handles writing text, and some other code that handles sending text. Almost no programs are created entirely by one person. The cool thing about programs is that we can bring in other people's code and use it in our own projects. So a lot of programming is simply joining up other people's code in a unique way while also doing some of our own computation.

To explain how a program works, imagine we could write a program that could build a house. As input to the program we talk to our customer about what kind of house they want: do they want it big or small, wood or concrete, etc. We would then create a bunch of smaller programs that each take care of one part of building the house. One program lays the foundation, another one paints. Yet when creating our programs we don’t have to make everything from scratch: we can get the program to use a shovel that some other guy made, or a saw that another company made. This is how most programs work. We take in some input from the user, through text, buttons on a screen, voice or whatever. We then write small programs that each handle one task that we need done. These small programs normally have some way to communicate with each other, and each of them will probably use some tools that another person has made. After we develop and write all the little programs, we package them up into one box and release it to the world.

program analogy
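For fun, here's a rough Python sketch of the house-building program described above (all the names are invented). Notice how it's just small parts joined together, plus one borrowed tool:

    import random   # a "tool somebody else made" that we simply borrow

    def take_order():                    # one small program: gather input
        return {"size": "big", "material": "wood"}

    def lay_foundation(order):           # another: do exactly one job
        return "laid a " + order["size"] + " foundation"

    def paint(order):                    # another, using the borrowed tool
        color = random.choice(["blue", "white"])
        return "painted the " + order["material"] + " walls " + color

    def build_house():                   # the box that joins the parts
        order = take_order()
        return [lay_foundation(order), paint(order)]

    print(build_house())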

Anyway, now that we know what a program is, we can talk about those two key developments. The first, terminals and keyboards, allowed users to type their programs into computers instead of creating physical punch cards. While these initially started off as electronic typewriters that physically printed on paper (teleprinters), they quickly moved to being electronic screens. In the 70s, these rapidly took over as the main way that we interacted with computers. Yet the terminals of this era are still quite a far cry from the user-friendly, modern computer interfaces that we use today. These terminals connected to the CPU via special input/output wires and would receive a constant stream of binary data (0s and 1s) from the CPU. If you remember from before, we know that binary data can represent any type of data. The CPU would tell the screen which pixels were meant to be illuminated and which ones weren’t. It determined this with an additional program, which would convert the text the user was writing into mappings of the letters onto the screen. The screens of this time were limited to text only, and couldn’t really represent images or graphics very well. At the same time the CPU would listen for input from the keyboard. This would also be transmitted as binary data through a direct connection, which would tell the CPU which key was being pressed.

old school terminals

The other key development came with the creation of operating systems. With the programs and computers of the 50s and 60s, programmers had to reserve time slots on huge computers to run their programs. They would prepare their programs on their own time and then convert them into punch cards. They would then bring a stack of their punch cards during their allotted time slot and feed it to the computer. This is kind of inefficient. We needed a better, faster way of coordinating which programs the computer ran. The first innovation in this area came in the form of program queues. This worked by having automated systems in the computer that would run programs one after another. Eventually this evolved into even more complicated software systems that used a technology called time-sharing, which allowed one computer to run multiple programs at once. This was a huge deal at the time and changed the way programmers worked. Now, multiple programmers could use the same computer at once, have their own terminal, and get their programs run immediately. Running a system like this requires very careful management and control of the CPU, and thus required special programs to be developed to handle this. These special programs operated the entire system, and thus were named operating systems.

operating systems

As computer technology developed in the 70s and 80s, computers became more powerful, smaller, and cheaper. These factors would eventually lead to the personal computing revolution, where it became possible for normal people to own their own computers. Yet computer systems at the time still required special degrees or training to use and were much too complex for the average person. This led to the development of new operating systems that had much more functionality built into them. Things like graphical user interfaces were created, allowing users to see what they were doing. At the same time, operating systems began to handle more and more of the complex parts of running applications, until it became as simple as pointing and clicking icons that represented programs. Security started to get built in, along with easier ways to load and run different programs. Operating systems for personal computers were completely revolutionary at the time, yet are often overlooked today. Whenever you turn on your computer and you see that Apple, Windows or Linux logo, know that your computer is automatically running that operating system program that manages everything for you. Operating systems are wonderful, yet we’ve kind of forgotten about them as time has gone on. Just look at these images of the Windows 95 launch. Nowadays operating system updates just bother people, and we’ve seemingly developed the reflex of hitting “remind me in 24 hours” (a cycle that can last years).

excitement over operating systems

Computer Networks and the Internet

From the beginning of time, humans have sought to develop technologies that would better allow for the transfer of data between individuals. The first of these was developed tens of thousands of years ago: yelling at your friend from across the forest. This method is great for many reasons: data can be transferred almost instantaneously, and the hardware required to facilitate this data transfer is built directly into our bodies (what would the software be in this case?). One drawback of this method is that as distance increases, our data transfer ability decreases — it’s hard to yell at your friend from across a continent. Eventually we developed writing and messengers, which allowed us to transfer data over huge physical and temporal distances. This method, however, was quite slow. A message from Rome to Beijing could take several weeks to send. Thousands of years later we reached our next big breakthrough: telegraphs and telephones that transferred data at the speed of light. This was a huge advancement and gave us almost instantaneous data transfer across the globe. Yet it was limited in bandwidth and in the types of data it could carry: mostly speech and text, and it couldn’t do things like transfer large programs or files very effectively. With computer networks and the internet we’ve finally found a solution to many of these problems.

evolution of communication technologies

In the 1960s, several universities and other organizations began to have multiple computers spread across their campuses. When someone wanted to transfer a file or program between two computers, they had to use a sneakernet, meaning they had to have someone physically load the file onto magnetic tapes and run it across campus. The speed of your data transfer would depend on the speed that person ran. Computer programmers are lazy, and would rather spend months developing an automated way to do this than spend the 2 minutes it takes to walk. Therefore someone had the bright idea of connecting different computers with wires and setting up some programs that would allow them to transfer files and programs through those wires (remember, we’re still only transmitting 0s and 1s). This led to the first computer network, created in 1965 at MIT.

lazy programmers

Eventually, this led to the desire to connect universities and organizations that were spread around the country. This was funded by the U.S. Defense Department, and computers across the nation were connected using infrastructure that already existed: telephone lines. It worked by sending 0s or 1s directly to other computers over telephone lines at an extremely fast speed. These 0s and 1s were encoded into audio form, which is where the weird old dial-up internet sounds come from.

ARPAnet

Credit

Just like with other files or data types, certain formatting and protocols had to be developed in order to make those 0s and 1s mean something (TCP/IP). In 1969 the first message was sent on this proto-internet. A UCLA student sent the message “lo” across the network of connected computers and phone lines. He was attempting to send the word “login” but the system crashed, which just goes to show how little has changed since those days. Yet it was almost a happy mistake. The word “lo” is used to draw attention to an interesting or amazing event, and what came in the next 50 years was indeed both interesting and amazing.

UCLA Stanford lo message

After several years of work improving the system, we were able to consistently send messages huge distances through this network. Eventually people at home started to connect their own personal computers to this network and the internet was born. Yet in the early days of the internet, we were still missing several of the key features that make it the awesome living virtual world that we have today.

One of the most important developments was the creation of standard formats for internet files, which together came to be called websites. When you connect to a website, your computer is really just downloading several files from somewhere and then using a special program called an internet browser to display those files in a fancy way.

This idea of formatting websites and files started with the creation of HTML, or Hypertext Markup Language. This was a special way of writing text so we could tell computers what kind of formatting we wanted on it. This includes things like making certain text bold, struck through or italicized, or adding links and font sizes.

html and browser

Later on, we added a couple of other technologies, namely CSS (Cascading Style Sheets) and Javascript. These allowed us to make much more complicated websites. CSS allows us to tell the browser what we want the website to look like: make this text blue, put this text on the side here, make a black box around this image. Javascript allows us to make our websites interactive: click this button to send an email, type in here to search something, hit enter to calculate something.

But what I really want to stress is the fact that these are all simply files of text/code that the computer downloads and interprets. Below I’ve included some screenshots of several types of this text/code, with descriptions of what they do. When your browser downloads these files, it interprets them in a special way and displays the website in the manner that’s intended. Below this is real code to give you an idea of how it works!

html css js
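To make this concrete, here is a tiny, made-up example of what such text/code looks like, squeezed into one small file that any browser can render. The heading, color and function name are invented purely for illustration:

    <!-- HTML: the content and structure of the page -->
    <html>
      <head>
        <style>
          /* CSS: what the page should look like */
          h1 { color: blue; }
        </style>
      </head>
      <body>
        <h1>Hello, time traveler!</h1>
        <button onclick="sayHi()">Click me</button>

        <script>
          // Javascript: what the page should do when you interact with it
          function sayHi() {
            alert("Hi there!");
          }
        </script>
      </body>
    </html>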

Theoretically, if you wanted to, you could send a website through the physical mail system. Just print out all your HTML, CSS and Javascript files and mail them to someone. They could then type that text back into files on their computer, open them in their browser, and the browser would render the website!

website through mail

Of course, modern websites are extremely complex and can contain thousands of lines of code, requiring hundreds of developers to create. But theoretically, if I mailed you the code of this website, you could type it all up and load it into your browser on your own computer.

Additionally, oftentimes websites and browsers maintain a kind of link between them. Think of this as a phone call rather than a letter. This means that after you initially download the website, an open communication link is maintained. Using Javascript, data can be sent or requested through this link, making the website interactive. This is important for sites like messaging websites, which need to constantly send and receive data and update the page to show new messages.

The next question we have to answer is how the files for different websites are routed and sent to the right place. It’s easy enough when there are only two computers connected to a set of wires, but when there are millions it can get very tricky. In this way, the internet can be thought of as an automated, complex postage system.

The first thing that we have to understand is that every computer connected to the internet has an address called an IP address. The same way that your postcode and address allow postmen to deliver mail to the right place, the IP address allows the automated internet systems to deliver data to the right place. IP addresses themselves are just numbers, so we usually use friendlier names like google.com or https://julian.bearblog.dev/, which your computer automatically looks up and translates into the right IP address. When you type in such an address, your browser sends a little message to that address requesting whatever file you put after the address. For example, google.com/about will send a message to the address “google.com” requesting the “about” file. Google will then promptly send back the corresponding HTML, CSS and Javascript files that you have requested. Note: if you don’t request any file specifically, you will usually get back the default file, normally called “index”. You can test this out by entering any website and adding /index.html after the URL, and you’ll see that most of the time you’ll just get the default web page.

ask google asking for
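Here's a sketch of that request in Python, doing manually what your browser does automatically (example.com is a site that exists specifically for demos like this):

    # Ask a server for its default file, just like a browser does.
    import urllib.request

    response = urllib.request.urlopen("https://example.com/")
    html_text = response.read().decode("utf-8")   # raw bits, read as text

    print(html_text[:80])   # the start of the HTML file we were sent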

So what’s happening behind the scenes when we request these files? If your computer is connected to the internet, there are several things that automatically happen. First, you’ll connect to your local router in your home or office, or your neighbor's office that you got the wifi password from when you moved in because your internet “hasn’t been set up yet” but you’ve secretly still been using. This can be through a physical cable like an ethernet connection, or more commonly, through wifi. Wifi is just a wireless connection to your router through radio waves, kind of like two little walkie-talkies built into your computer and your router. Your router is then connected with a cable to your local ISP (internet service provider) like AT&T, Comcast or Spectrum.

These ISPs connect to huge routers located around the world, which automatically read where you’re trying to go and route your message to the appropriate place. Once your message reaches the appropriate place, if you’ve formatted everything correctly, it will get sent to a server. A server does exactly what it sounds like it does: serves data. If you’ve typed in www.amazon.com, there is another computer somewhere in the US, owned by Amazon, that is built and programmed to handle your request and serve files. It will check who you are, automatically get the web page along with anything in your cart, and package it all up and generate the files. This is all done through Amazon’s backend code. It will then take all those files and send them back through those very same routers that your original request message came from. Seemingly instantaneously, those files will reach your computer in binary form. Your computer will then interpret the binary as HTML, CSS and Javascript files, and promptly render the website in your browser. The amazing thing is that all of this can happen in just seconds!

asking for website
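And here's a glimpse of the other side of the conversation. Python happens to ship with a tiny file server built in, so with a few lines you can be the one serving files (a toy setup, not how Amazon actually does it):

    # Run this, then visit http://localhost:8000 in your browser.
    import http.server
    import socketserver

    handler = http.server.SimpleHTTPRequestHandler   # serves files from this folder
    with socketserver.TCPServer(("", 8000), handler) as server:
        print("Serving on port 8000...")
        server.serve_forever()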

This is really all it takes to get the Internet from rocks. As a species, we’ve successfully managed to put a bunch of rocks and metal in very specific configurations across the globe in order to create an automated network of data transmission. If you think about all the work and research it’s taken to get to this point, it’s truly amazing. Now, should you ever meet any time travelers, you should be well equipped to explain what these shiny metal devices that we all seemingly poke away at all day are. Hopefully that preoccupies them for a while and gives them the ability to google things, instead of exposing your complete lack of understanding about how most things in our society work.

conclusion londoner

Author’s Note: Many of the concepts that I’ve explained have been simplified in order to make them more understandable. I’ve also skipped over several concepts that aren’t super important to your general understanding, but if you’d like to learn more or ask me any questions, please feel free to reach out! Additionally, there may be things I have expressed that are incorrect or not 100% accurate. Please reach out to me if there are any corrections to be made. Finally, I want to give credit to Tim Urban's Wait But Why blog for inspiration on the format and style.