Programming Languages For Beginners

In the last post we discussed the topic of algorithms. That was fun and all, but how do we turn them into useful programs? That is where programming languages come in. The goal of this post is to serve as a broad overview of programming languages for beginners to get their feet wet.

Beginners often wonder which programming language they should learn first. While there are some choices that are objectively better than others, any general purpose programming language will have concepts applicable to other languages. Having a solid grasp of those fundamentals will be very helpful in any programmer’s journey.

We’ll be covering three sub-topics in this overview of programming languages: generations of programming languages, paradigms of programming languages, and execution types of programming languages.

Generations of Programming Languages

As more great minds entered the fields of engineering and technology, they developed more advanced programming languages. With today’s modern programming languages, powerful capabilities can be unleashed with just a few text commands.

However, this wasn’t always the case. Modern programming languages have evolved from their earlier predecessors which were much more difficult to use and understand. Let’s discuss them one by one.

Machine Language (1st Generation)

Computers can only read one language: machine language. This is true even today when using modern programming languages. The difference is that modern programming languages provide extremely useful abstractions that prevent us from having to write machine language.

Machine Language

Machine language is represented in binary, or in other words, ones and zeros. We touched on this topic a bit in my post about basic computer literacy. Fortunately, we rarely have to deal directly with machine language these days. We are standing on the shoulders of giants.

Assembly Language (2nd Generation)

Next came assembly language, which is a huge step up from machine language. Instead of writing ones and zeros, we have the luxury of using text to tell the computer what to do. This allows us to give instructions in a way that is much more natural for human brains, and let a program called an assembler translate the instructions into machine language.

Assembly Language

Unfortunately, we are still limited by the fact that in assembly language, one assembly instruction translates to one machine instruction. This makes for tedious work, despite the obvious advantages over machine language.

Unlike machine language, assembly language is still used directly today. Low-level embedded systems, device drivers, and real-time systems are often programmed in assembly language. Assembly language gives the programmer greater direct control over the machine instructions, which can be useful when optimizing for performance or working with hardware.

Structured Programming Language / High-Level Language (3rd Generation)

Eventually, people started creating higher-level languages. These people saw the ideas of assembly language and took them one step further. These high-level programming languages still use words instead of binary numbers, but one high-level instruction may translate into many machine instructions. Structured programming concepts are built into these languages by default.

Java, a structured, high-level programming language.

This all provides a huge boost to programmer productivity, as it allows the programmers to focus more on what they want the computer to do, and not on how the computer performs the task. These languages are much more beginner friendly than assembly language.

A computer cannot execute high-level code directly. Instead, we must compile or interpret the code. More on that in a moment.

Most of the popular and recognizable programming languages in use today fall into this category. This includes languages like C, C++, C#, Java, Go, Ruby, JavaScript, and many more.

Domain Specific Programming Language (4th Generation)

Finally, we have domain specific programming languages. A domain specific programming language is a high-level language created specifically for solving a certain kind of problem.

A couple of examples include CSS and SQL. We use CSS specifically for adding style to web pages, and we use SQL specifically for interacting with databases.

SQL, a domain specific programming language

Another example would be JSON, a data format whose syntax is a subset of JavaScript’s object literal notation. It’s commonly used for sending object data over the internet via HTTP.
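To make that concrete, here’s a tiny sketch showing how JavaScript converts an object to and from JSON (the object and its fields are invented for illustration):

```javascript
// A JavaScript object (the fields are made up for this example).
const user = { name: "Ada", subscribed: true };

// Convert the object to a JSON string, ready to send over HTTP.
const text = JSON.stringify(user);
console.log(text); // prints {"name":"Ada","subscribed":true}

// Convert the JSON string back into a JavaScript object.
const parsed = JSON.parse(text);
console.log(parsed.name); // prints Ada
```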

These languages make it very easy to solve problems within their domain, but they are not very useful for solving problems outside of their domain.

Complex software, like a web application, often uses a mix of high-level languages and domain specific languages to solve problems.

Now let’s move on to our second topic: paradigms of programming languages.

Paradigms of Programming Languages

There are many different high-level programming languages, and each one has its own unique flavor. There are several different programming paradigms that languages can use. Procedural, object oriented, and functional are three of the most popular paradigms.

While we can neatly define these paradigms, the programming languages themselves often borrow ideas from all of them.

For example, while Java was clearly designed as an object oriented programming language, recent versions contain more functional concepts, like lambda expressions. It’s beneficial to become familiar with many programming paradigms in order to become a well rounded developer.

Procedural

This is the traditional method of programming, and is conceptually the most straightforward. It simply involves giving a sequence of instructions to solve a particular problem. The programmer’s focus is on the algorithm itself. A few examples of programming languages that lean heavily on this paradigm include COBOL, Pascal, and C.
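In the procedural style, a program really is just a recipe. Here’s a minimal sketch in JavaScript (the temperature conversion task is invented for illustration):

```javascript
// A procedural sketch: a plain sequence of steps, with the focus
// on the algorithm itself (here, a temperature conversion).
function fahrenheitToCelsius(f) {
  const c = (f - 32) * 5 / 9; // apply the conversion formula
  return c;
}

console.log(fahrenheitToCelsius(212)); // prints 100
console.log(fahrenheitToCelsius(32));  // prints 0
```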

Object Oriented

In the object oriented paradigm, we organize our code into “objects”. An object is basically data and tasks you can perform with that data. We write programs in terms of how the different objects interact with each other.

This is a powerful paradigm because it allows us to model the real world and break up large problems into smaller, encapsulated problems.

Concepts of object oriented programming (OOP). You could spend a long time studying this subject alone.

Much of programming involves creating abstractions that make our lives easier, and object oriented programming assists in that regard. A few examples of programming languages that lean heavily on this paradigm include Java, C#, and C++.
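JavaScript isn’t on that list, but its class syntax can still sketch the core idea of an object: data bundled together with the tasks you can perform on that data (the bank account example is invented for illustration):

```javascript
// An "object": data (the balance) plus the tasks that operate on it.
class BankAccount {
  constructor(owner) {
    this.owner = owner;
    this.balance = 0; // the object's data
  }

  deposit(amount) { // a task that operates on the data
    this.balance += amount;
  }

  withdraw(amount) {
    if (amount > this.balance) {
      throw new Error("Insufficient funds");
    }
    this.balance -= amount;
  }
}

const account = new BankAccount("Ada");
account.deposit(100);
account.withdraw(30);
console.log(account.balance); // prints 70
```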

Functional

The functional paradigm is a declarative style of programming that treats computation as the evaluation of mathematical functions. It avoids changing state and mutable data. I have to admit, I’m much less adept with functional programming than the other two paradigms.

While academia tends to get excited about functional programming, object oriented programming is much more common in the business world.

However, functional programming has many advantages and can be an extremely powerful tool in the right hands. It often makes programs less complex and easier to reason about.

Languages often borrow from functional concepts without forcing you to embrace pure functional programming in its entirety. The idea of “passing a function to a function” alone is a great tool to have on your tool belt.
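Here’s what “passing a function to a function” looks like in JavaScript, using the built-in map and filter methods:

```javascript
const numbers = [1, 2, 3, 4, 5];

// map and filter each take a function as an argument and
// apply it to every element of the array.
const doubled = numbers.map(n => n * 2);        // [2, 4, 6, 8, 10]
const evens = numbers.filter(n => n % 2 === 0); // [2, 4]

// Neither operation changes the original array. Avoiding
// mutation like this is a core functional idea.
console.log(numbers); // prints [ 1, 2, 3, 4, 5 ]
```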

A few examples of programming languages that lean heavily on this paradigm include Haskell, Lisp, and Erlang. I’d love to hear about your experiences with functional programming in the comments, especially if you have any tips for helping others think in that paradigm.

With that section wrapped up, let’s move on to our final topic: execution types of programming languages.

Execution Types of Programming Languages

There are three main execution types of programming languages. A program can be compiled, interpreted, or run on a virtual machine.

Compiled Languages

In compiled languages, a program called a compiler creates an executable file that runs directly on the machine. This executable contains machine code, so the original source code is no longer necessary once the executable has been created. Because of this, compiled programs usually run pretty fast. Examples of compiled languages include C and C++.

Interpreted Languages

In interpreted languages, a program called an interpreter reads the code, follows along, and does what it says. We need the source code every time the program is run, and we don’t create an executable.

Interpreted programs are usually slower than compiled programs. JavaScript, the language I’ve been using recently in my game tutorials, is an example of an interpreted language.

Virtual Machine

A language designed for a virtual machine is like a cross between a compiled and interpreted language. We compile the program, but into something called “byte code”, not machine code. The virtual machine then acts as the interpreter for the byte code.

These programs are usually slower than machine code, but faster than interpreted programs.

A couple of examples of languages that run on a virtual machine include Java and C#. In Java, the virtual machine is called the JVM (Java Virtual Machine). In C#, the virtual machine is called the CLR (Common Language Runtime).

Summary

As a beginner, the sheer number of programming languages out there can be overwhelming. It can make it difficult to know where to begin learning.

Fortunately, there are some ways of categorizing programming languages that allow us to move past all the cruft and focus on the fundamentals.

As the years have gone by, programming languages have become faster, easier to work with, and more powerful. Yet COBOL, an unfashionable language which first appeared in 1959, is still widely reported to power a huge share of the world’s business transactions.

This just goes to show that the business world prizes working software above all else, and a professional programmer is likely to encounter all different kinds of programming languages throughout their career. It helps to understand how these different languages have evolved and what capabilities they offer. Doing so will prepare you to learn whatever hot new language comes along next.

Thanks for reading as always, and don’t forget to subscribe!

What Is An Algorithm?

Welcome back! This is the third post in an introductory series about learning programming. In the last post I covered how we can represent data and operate on it in the context of a computer program. That makes for a natural bridge to today’s topic: algorithms. So, what is an algorithm?

No, it’s not this. This is an Al Gore Rhythm.

You might have an idea of what an algorithm is based on things you’ve heard other people say. I’ve watched friends scroll through their Facebook news feed and complain about how “the algorithm” is always showing them the same posts or the same people. While this is true, algorithms affect our lives in many other places besides social media. Every time we play a video game, search Google, or swipe a credit card we are relying on algorithms to produce the results we desire. Software and algorithms go hand in hand.

Simply put, algorithms are precise, deterministic instructions for performing a specific task with data.

The key words here are instructions and data. An algorithm must use data that a computer can represent. We talked about what that data looks like and what operations are available to a computer for manipulating it in the previous post. Now we need to discuss how to build a complex set of instructions for the computer to carry out. We will cover three fundamental building blocks for creating algorithms: variables, input/output, and control flow.

Variables

One thing we neglected to cover when we were talking about data is the concept of variables. In a computer program, a variable is a placeholder for data. It represents a location in memory where a value can be stored. We can define variables by giving them a name and, depending on the programming language, possibly a data type.

Storing data in variables is a key component of creating useful algorithms. We often need to save off pieces of data to be used in a later step in the algorithm, and variables provide us a method of doing that.

Variables are powerful because they allow us to store the state of previous calculations, but they are a double-edged sword. As the number and scope of variables grows, so does the complexity of the program. It’s best to be judicious about their use in order to write programs that are understandable and maintainable.
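Here’s a small JavaScript sketch of that idea, where each variable saves off a piece of data for a later step (the prices and the 8% tax rate are invented for illustration):

```javascript
// Each variable stores the state of a previous calculation.
let price = 4.50;
let quantity = 3;
let subtotal = price * quantity; // saved for the next step
let total = subtotal * 1.08;     // apply an assumed 8% tax rate

console.log(total.toFixed(2)); // prints 14.58
```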

Input and Output

It’s great to be able to store data in variables, but where does that data come from? And where does it ultimately end up? This is where input and output come in. You might see it abbreviated as “I/O”, and hear people say it like, “Eye Oh”. You could make fun of these people, but that wouldn’t be very nice.

I/O is an extremely important feature for allowing programs to be useful to humans. If I didn’t input my personal data into Facebook, they wouldn’t be able to sell it, err I mean, I wouldn’t have a personalized profile to connect with friends and family.

If computers couldn’t output data to the screen, my enemies would have no way of knowing when I’m dancing over their lifeless corpses in Fortnite. This is all very important stuff.

Input

Input comes in lots of different forms. One example is data that users enter manually, like their name and email address when subscribing to my blog. Input could be arrow key presses and mouse movement when playing a first person shooter. It could even be a credit card number when swiping one’s credit card at a fast food restaurant.

Output

Likewise, output can come in many forms. It could be the computer graphics displayed on your monitor when playing a video game or watching Netflix, an Excel file, or it could even be a web page, like the one you are currently reading.

At the most basic level, input and output could even be as simple as a single piece of data. A large part of programming useful algorithms is figuring out how to chain together numerous smaller components using their inputs and outputs.

Control Flow

The last building block we need to cover is control flow. Here we are dealing with determining the order in which instructions are carried out. While there are many ways to give instructions, not all of them are good.

When programming it’s important to keep in mind who your audience is. The compiler/interpreter is one member of that audience, but yourself and other programmers are arguably more important to cater to. If programs are difficult to comprehend they will be difficult to modify and improve. That’s why we use something called structured programming.

Before the idea of structured programming came along it was like the wild west out there in Nerd Land. People used GoTo statements in their code to just jump around to wherever they wanted. Sounds awesome, right? No restrictions, unlimited freedom. What could go wrong?

Well, it turns out this was a pretty terrible idea. People wrote spaghetti code that was unmaintainable. It got so bad that a dude named Edsger Dijkstra came along and said, “Hey! Stop this madness! It’s harmful!”. And just like that, structured programming was born.

Edsger Dijkstra calls goto statements harmful.
Here we see Edsger complaining about having his yearbook photo taken.

Structured programming is the concept of restricting which kinds of control patterns are acceptable when giving the computer instructions. Most modern programming languages force these restrictions upon you, and for good reason. There are two main advantages, both of which make programmers’ lives easier:

  1. Structured programming makes it easier to avoid mistakes.
  2. Structured programming makes it easier for you or somebody else to understand your program.

There are only a few different patterns used in structured programming, but they are sufficient for expressing any kind of algorithm you would like to implement. We will refer to these patterns as sequence, choice, and repetition, and we will use flow charts to help describe their behavior. The rectangles represent actions, and the diamonds represent decisions.

Sequence

Sequence Pattern

The sequence pattern is the simplest of the three. It simply involves executing one task after another in succession.

To the right is an example demonstrating an algorithm for finding more joy, losing weight, and making chicks dig you, using only the sequence pattern.

Choice

Choice represents a fork in the road. We ask a question, and then one of two things will occur depending on the answer. The answer is expressed as a piece of Boolean data, that is, either true or false. There are two basic variants of this pattern. The one-way branch either does something or nothing at all, while the two-way branch chooses between two options.
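In JavaScript, the two variants look like this (the temperature example is invented for illustration):

```javascript
function describe(temperature) {
  // One-way branch: either do something, or do nothing at all.
  if (temperature > 100) {
    console.log("Warning: boiling!");
  }

  // Two-way branch: choose between two options.
  if (temperature < 0) {
    return "frozen";
  } else {
    return "liquid";
  }
}

console.log(describe(-5)); // prints frozen
console.log(describe(20)); // prints liquid
```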

Below is an algorithm for selecting the greatest quarterback of all-time, using only the choice pattern. Both the one-way and two-way branches are demonstrated.

One-Way and Two-Way Choice Patterns

Repetition

With repetition, a set of instructions is repeated until some condition is met. “Loop” is another term people use to describe this pattern.

There are two basic variants here as well, the pre-test loop and the post-test loop. The difference between them is when the condition is checked. Here is an example illustration demonstrating the algorithm used by the New England Patriots.

Pre-Test and Post-Test Repetition Patterns

Unfortunately, this particular example appears to be an infinite loop, which we will try to avoid in our programs if we don’t want them to crash. We can avoid such scenarios by combining these three patterns into more complicated ones that allow us to create more complicated logic. Let’s put everything we’ve learned together and take a look at how to do that with a full example algorithm.
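Before we get to the full example, here’s a quick JavaScript sketch of both loop variants (these ones do terminate):

```javascript
// Pre-test loop: the condition is checked before each pass,
// so the body may run zero times.
let count = 0;
while (count < 3) {
  count++;
}
console.log(count); // prints 3

// Post-test loop: the condition is checked after each pass,
// so the body always runs at least once.
let tries = 0;
do {
  tries++;
} while (tries < 1);
console.log(tries); // prints 1
```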

Example Algorithm: Fizz Buzz

Fizz Buzz Algorithm

Fizz Buzz is a good example algorithm to demonstrate these concepts. Our goal is to print out every number from 1 to 100. However, we need to replace each multiple of three with the word “fizz”, and each multiple of five with the word “buzz”. If the number is a multiple of both three and five, then we need to replace it with the word “fizzbuzz”.

To the left is an example flowchart demonstrating an algorithm for Fizz Buzz. If you examine it closely, you’ll realize that we are using all three of the building blocks covered in this post: variables, input/output, and control flow. We are also using data operations we learned about in the previous post.

We start by creating the “number” variable and setting its value to 1. Then we create a pre-test loop where we check if the number variable is less than or equal to 100.

Inside this loop are various decisions for determining when to output “fizz”, “buzz”, the contents of the number variable, or a new line. We use concepts like the modulo operator, the AND operator, and equality checks to make these decisions.

We continue performing these calculations until the value of the number variable causes the initial pre-test condition to return false.
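Here is one way to express the flowchart’s logic in JavaScript (the flowchart itself may arrange the decisions slightly differently):

```javascript
// Fizz Buzz: a pre-test loop with choice patterns inside,
// using the modulo (%) and AND (&&) operators.
let number = 1;                 // create the number variable
while (number <= 100) {         // pre-test condition
  if (number % 3 === 0 && number % 5 === 0) {
    console.log("fizzbuzz");    // multiple of both three and five
  } else if (number % 3 === 0) {
    console.log("fizz");        // multiple of three
  } else if (number % 5 === 0) {
    console.log("buzz");        // multiple of five
  } else {
    console.log(number);        // just the number itself
  }
  number++;                     // move on to the next number
}
```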

After this program completes, the screen would have output that looks like this, with the pattern continuing all the way up to 100 (or in this case, “buzz”):

1
2
fizz
4
buzz
fizz
7
8
fizz
buzz
11
fizz
13
14
fizzbuzz
... 

Summary

We started this post by posing a simple question: what is an algorithm? We learned that an algorithm is a set of instructions for performing a task with data, and we talked about the three building blocks for creating them: variables, input/output, and control flow. In our final example, we learned how to combine these building blocks to create complicated algorithms and model them with flow charts.

In the next post we will start learning about what we use to turn flow charts into code that computers can understand: programming languages. Until then, make sure to subscribe to stay up to date with the latest content.

Representing Data in Computer Programs

Welcome to the second post in my introduction to computer programming series. In the first post we learned about basic computer literacy and wrote our first “Hello World!” program. We also talked about how computers need information, which we usually refer to as data, to operate on. This post will cover the topic of representing data in further detail. We will be working with data often, so it’s important to know what it looks like.

Now, seasoned programmers may accuse me of oversimplification on this topic. They have a valid point, but my goal is to help newbies learn the craft, not to bog them down in the details. In due time your programming journey will fill in many of the gaps I am intentionally leaving blank.

Data Types

Let’s start talking about representing data by discussing the three main types of data: Numbers, Text, and Booleans (true/false). While there are variations on these, these three types serve as a useful place to begin our discussion.

Numbers

Number pad for entering data

Numbers could be integers, but they could also have decimal points in them. Or they could be negative. Different types of numbers are not always represented in memory the same way, but we can save those details for a later discussion.

Text

Text just refers to a series of characters. In the “Hello World!” example we covered in the previous post, “Hello World!” was a piece of text data. We usually call a piece of text data a string because it is a string of ASCII or Unicode characters. These are just standards for mapping characters to numbers so we can represent them as binary numbers. From now on we will refer to any piece of text data as a string.
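We can peek at those character-to-number mappings directly in JavaScript:

```javascript
// Every character in a string maps to a number under ASCII/Unicode.
const greeting = "Hello";

console.log(greeting.charCodeAt(0));       // prints 72, the code for "H"
console.log(String.fromCharCode(72, 105)); // prints Hi
```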

Booleans

Booleans are the simplest of the three data types. Much like a transistor, they can only be found in one of two states, in this case true or false.

The best thing about a boolean is even if you are wrong, you are only off by a bit. 😜

Operating On Data

We’ve now seen that there are three main ways of representing data. There are also three main operations that a computer can perform on data. Let’s refer to these three operations as Math, Comparison, and Boolean. These three operations are ways of using the data we already have to create new pieces of data.

Math

Get off your ath, let’s do some math!

We can use math to create new numerical data from the data we already have. For example, adding the numbers 5 and 2 gets us a new piece of numerical data, 7. Here are some mathematical operations we can perform:

  • Addition, often represented in code by the + symbol.
  • Subtraction, often represented in code by the - symbol.
  • Multiplication, often represented in code by the * symbol.
  • Division, often represented in code by the / symbol.
  • Modulo, often represented in code by the % symbol.

Modulo might be a foreign term to you. It simply means performing division and returning the remainder.
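Here are all five operations in JavaScript:

```javascript
// The five mathematical operations applied to a pair of numbers.
console.log(5 + 2); // addition: prints 7
console.log(5 - 2); // subtraction: prints 3
console.log(5 * 2); // multiplication: prints 10
console.log(5 / 2); // division: prints 2.5
console.log(5 % 2); // modulo: prints 1, the remainder of 5 / 2
```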

Comparison

The car on the right is greater than the car on the left.

We can compare multiple pieces of data to create new boolean data. For example, asking if 2 is less than 7 gets us a new piece of boolean data, true. Here are some comparisons we can perform:

  • Less than, often represented in code as <
  • Less than or equal to, often represented in code as <=
  • Equals, often represented in code as ==
  • Greater than, often represented in code as >
  • Greater than or equal to, often represented in code as >=
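Each comparison takes two pieces of data and produces a new piece of boolean data; in JavaScript:

```javascript
// Every comparison produces a new piece of boolean data.
console.log(2 < 7);  // prints true
console.log(7 <= 7); // prints true
console.log(2 == 7); // prints false
console.log(9 > 7);  // prints true
console.log(6 >= 7); // prints false
```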

Boolean

We can use boolean logic to create new boolean data. For example, it is true that 2 is less than 7, and it is false that 2 is less than 1. Asking if either of these values are true gets us a new piece of boolean data, true. However, asking if both of these values are true gets us a different answer, false. For further study on this, read up on boolean logic and De Morgan’s laws. Here are the three ways of combining boolean data:

  • AND, often represented in code as &&
  • OR, often represented in code as ||
  • NOT, often represented in code as !

The AND operation checks if both values are true, while the OR operation checks if any of the values are true. The NOT operation returns the opposite value of the original piece of boolean data.
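Here are those three operations in JavaScript, reusing the examples from above:

```javascript
const a = 2 < 7; // true
const b = 2 < 1; // false

console.log(a && b); // AND: both must be true, so this prints false
console.log(a || b); // OR: at least one is true, so this prints true
console.log(!a);     // NOT: the opposite of true, so this prints false
```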

Why Representing Data Matters

Wow, we covered a lot of ground! But it feels a bit academic, doesn’t it? The point of all of this is not to bore you to tears, but to show you that the amazing and wonderful computer programs that you use on a daily basis are just doing basic operations like these on the data you provide them. From your favorite websites to your favorite games, there is nothing magical going on.

However, combining these operations so that something useful happens can be a challenging task. This activity is called creating algorithms. Simply put, an algorithm is a set of instructions for the computer to follow. When we write code, we are really just writing algorithms that the computer can understand. We will talk much more about algorithms in the next post.

If you had trouble with any of this, leave a comment so I or somebody else can help you out. And remember to subscribe so that you can stay up to date with this series and the rest of the content I publish.

Basic Computer Literacy

Do you ever hear people discussing computers or technology and think to yourself, “Wow, I have no idea what these nerds are talking about, but they sure do sound smart.”? Or maybe even, “Wow, I wish these nerds were socially aware enough to change the subject.”? If you’ve ever found yourself in those shoes, you might benefit by developing some basic computer literacy.

The computer is incredibly fast, accurate, and stupid. Man is incredibly slow, inaccurate, and brilliant. The marriage of the two is a force beyond calculation.

Leo Cherne

I hope to make this post the beginning of a longer series about learning how to write computer programs. I’m starting out by covering basic computer literacy because it is important to have that foundation in place first. After all, how can we write programs for computers if we don’t really know what they are? Let’s start by tackling a few key terms.

Hardware vs Software

The first important distinction we need to make is the difference between hardware and software. To put it simply, hardware has physical form. You can hold it in your hand and tinker with it. Some examples of hardware include a keyboard, a mouse, a screen or monitor, a graphics card, or a CPU (Central Processing Unit).

Some hardware running some software.

Software, by contrast, does not have a physical form. It exists only as information stored inside the computer. Some examples of software include Facebook, Twitter, Gmail, Microsoft Excel, your web browser, my silly JavaScript games, and of course Flappy Bird. As budding computer programmers, our goal is to learn how to produce robust, useful software.

What Is A Computer?

A computer is a device composed of hardware that is powered by electricity. Its purpose is to store, retrieve, and process data. Data is just another word for information. Without data, a computer would just sit there with nothing to do. A computer could be many things, including your smart phone, your laptop, or a web server.

Programmable

One unique aspect of a computer is that it is programmable. This is the property of the computer that we are the most interested in because we want to control how data gets operated on through the programs we write.

We want to write programs, like this one! Except we will be putting our leading curly braces on the next line because we are not barbarians.

Most modern computers are designed based on the von Neumann architecture. Why am I telling you this? Mainly to look smart, but also because this architecture is what allows us to store both data and the instructions for operating on it (programs) in the same memory. This flexibility is what allows for so many awesome programs to run on the same computer. It would be a real shame if computers had to be totally rewired each time you wanted to switch between programs. Before our boy von Neumann came along, that’s what we had to do.

Digital

Computers are digital. What this means is that the data is stored in memory via a very large number of switches. These switches are more formally known as transistors. Each of these transistors represents something called a bit, and can be found in only one of two states, on or off. These bits are then gathered into groups of 8, which we call a byte. Computer memory is just a big pile of bytes. As the number of bytes grows we start using bigger units to measure them, like kilobytes, megabytes, gigabytes, and terabytes.

It would take 10 of these light switches to represent the decimal number 1000 in binary. Luckily, transistors are much smaller than light switches.

The two different states of a bit are represented by a 1 and a 0 respectively. You may have heard people refer to “ones and zeroes” when talking about computers. This digital aspect of computers is what they are referring to. Thus, understanding how binary numbers work can be helpful, especially when writing programs at lower levels of abstraction. Every piece of data we deal with, from personal demographic data to the colors you see on the screen, is ultimately represented by a series of ones and zeroes stored in memory. Fortunately, this design detail is usually hidden from us when we are programming with higher level programming languages.
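JavaScript can show us a number’s binary form directly (this is just for inspection; the ones and zeroes are how the value is actually stored):

```javascript
// Ten bits (ten "light switches") represent the decimal number 1000.
const binary = (1000).toString(2);
console.log(binary);        // prints 1111101000
console.log(binary.length); // prints 10

// And back again: read a binary string as a base-2 number.
console.log(parseInt("1111101000", 2)); // prints 1000
```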

Programming Hello World

That about wraps things up, but first I want to give you a small taste of things to come if you decide to learn more about programming. You are going to write your first program, and you don’t even have to leave this page!

When learning to program in a new language or environment, computer programmers often write a program called “Hello World!”. This is the simplest program you can write. It simply prints out the text, “Hello World!” to the screen. You are going to write your first “Hello World” program directly in your web browser.

If you are reading this on mobile, now would be a good time to switch to your laptop or desktop computer. Once you have done so, press the F12 key on your keyboard. This will launch the developer tools for your browser. In the example I used Mozilla Firefox, but this keyboard shortcut should work in whichever browser you are using. You can also get there by right clicking on your browser window and choosing the appropriate option. Next, navigate to the console tab of the developer tools and type in the following: alert("Hello World!").

In this case, alert tells the browser to alert the user of something via a dialog box. We are passing it some data, in this case the text, “Hello World!”. This data is what we are going to show to the user.

Basic Computer Literacy: Alerting "Hello World!" in the console.

Once you have that typed out, press the Enter key on your keyboard. You should see something pop up that looks similar to this, minus my shameless plug:

Basic Computer Literacy: Hello World alert dialog appears on the screen.

Summary

Congratulations! You have gained some basic computer literacy and you have written your first program in a language called JavaScript. JavaScript is one of the most popular languages in the world, and it is running on almost every website you visit. It’s also a language I have used extensively in my game tutorials.

Leave a comment below if you had trouble with any of this so I or somebody else can help you out. And remember to subscribe so that you can stay notified of new content and move on from basic computer literacy to an even deeper understanding of how computers work. Continue on to the next post in the series to start learning more about data.