The trend in programming styles is shifting away from object-oriented paradigms toward functional and procedural approaches. Languages like Go and Rust have omitted traditional object-oriented features such as classes and inheritance, while Swift encourages the use of plain structures over classes. Functional programming styles dominate in TypeScript and Scala.
Procedural programming is also experiencing a resurgence, with growing interest in languages like Zig, Odin, and Jai, alongside the popularity of Go and Rust. This trend prompts questions about the advantages and disadvantages of procedural programming compared to object-oriented and functional styles. The topic invites exploration of whether adopting a procedural style could be beneficial.
Full transcript, with some comments in square brackets []:
This is the return of procedural programming. I’m Richard Feldman. Back in the 1990s, in my hometown, I used to go to Borders Books and Music a lot. Anyone ever been to a Borders? I really think… okay, a ton of people. Wow. So, I would walk around, and this was sort of before the internet. We didn’t have the internet in my house, and nobody I knew had the internet. My dad had it at work, at the university, but it was mostly for email. So, the main way I learned about new programming things was from books. I remember walking around Borders and seeing all these books about this hot, new, super-hyped thing: object-oriented programming. Books like Head First Object-Oriented Analysis and Design. Sometimes you’d see exciting new releases of object-oriented languages like Java 1.1. There were also object-oriented programming books aimed at different demographics. It was just this overwhelming tidal wave of hype and excitement around object-oriented programming. It was so clear that this was the future, and the old procedural programming way was a thing of the past. The future was OOP.
I remember walking around Borders, thinking this, and it kind of stuck in my mind. I got into the industry, became a professional programmer, and did a lot of object-oriented programming. So, imagine my surprise — and imagine telling my past self — about a couple of years ago, in 2019. Here’s Andrew Kelley talking about the Zig programming language, which he created. It’s one of several new, up-and-coming low-level programming languages. He’s talking about how they’re trying to make a better C, not a better C++. They’re intentionally making this new programming language not object-oriented. Zig’s not the only one. You also see Rust, which is not an object-oriented language, and it’s very popular, getting even more popular. Odin and Jai are two other low-level, systems-level programming languages from the last 10 to 15 years. The only low-level language in that group being talked about here that is object-oriented is Carbon, which is explicitly designed to be a C++ successor and has C++ interoperability, so they kind of couldn’t get away from it.
This is a really far cry from what I remember of OOP being the obvious future, the thing that everybody was going to be doing. So, we’re talking about procedural programming here — the thing that came before OOP. In this talk, I’ll be discussing to what extent it…
What do their trends look like? We’ve seen a 30% increase in C usage over the past six years. Part of that could be explained by C++, as there’s likely some double-counting where people who say they’re doing C++ also answer that they’ve done C in the last 12 months. Go, on the other hand, has seen a 100% increase. I don’t have a double-counting explanation for that — it just seems like a lot of people are adopting Go. Anecdotally, at this conference, I’ve had three different conversations with random people in the last couple of days, and two out of the three said they’re adopting Go at work. I think it’s because people like Go, and maybe it’s because they now have generics. Either way, Go seems to be increasing in popularity, and it’s not an object-oriented language — it’s procedural programming.
Then there’s Rust. I gave Rust a 100% increase, but that’s being generous because six years ago, Rust wasn’t even on the chart — it was essentially zero. You could say it’s an infinite percentage increase, but that doesn’t seem reasonable, so to be fair to the other languages, we’ll cap it at 100%. The point is, Rust went from not being on the chart six years ago to now having 10% of respondents say they’ve used it in the past 12 months. That’s an absolutely massive increase. None of these languages — Go, Rust, or C — have classes, objects, subclasses, or inheritance. They’re all procedural programming.
Since we’ve seen a mix of ups and downs on the object-oriented side, and all the mainstream procedural languages on this survey have seen some form of increase, it means that over the last six years, among Stack Overflow survey respondents, object-oriented programming has lost ground to procedural programming. Some object-oriented languages declined, while none of the procedural ones did — they all went up. What’s going on here? I thought object-oriented programming was supposed to be the future forever.
This is what we want to talk about today: why is procedural programming becoming proportionately more popular now? To be clear, object-oriented programming still rules the roost — it’s still the most popular paradigm by a healthy margin. The question is about the trend. Why is procedural programming becoming more popular? Why is it coming back, even though it’s the thing that came before object-oriented programming?
Here’s the outline of the talk: The first and largest section will discuss differences in features between procedural and object-oriented paradigms, breaking it down into two categories of object-oriented programming. A shorter section will cover differences in style between the two, and the shortest section at the end will address what changed — what factors, when combined, led to the resurgence of procedural programming.
We’ll start with differences in features. To set the scope of this talk, I’ll focus on procedural and mostly object-oriented paradigms, with just a brief mention of functional programming. This talk is mostly about imperative programming, so while I’ve talked a lot about functional programming in the past, this discussion isn’t about that. I also won’t get into other paradigms like logic programming.
The focus here is really on procedural and object-oriented programming and their differences. A language paradigm is often about a combination of language features and the style in which you use them. We’ll start by discussing the features. The procedural feature set and style revolve around the idea of using procedures instead of go-tos. If you look at hardware, it doesn’t have a concept of procedures — this is an abstraction we invented on top of hardware, built using conditional and unconditional jumps, the concept of the stack, and so on. The idea is to organize programs into procedures, which has become a universal practice. You might even think, “What do you mean by paradigm? That’s just programming,” because this paradigm has stuck around so pervasively.
Functional programming could be seen as procedural programming with restrictions — essentially, procedural programming while avoiding mutation and side effects. Object-oriented programming is a bit trickier to define. Let’s talk about that. Alan Kay, a past speaker at ET and the person who coined the term “object-oriented,” gave a keynote on this. He coined the term at the University of Utah in the 1960s. In 1966, he described his influences when developing object-oriented programming. He was inspired by Simula, the first language to introduce objects and classes. Simula started as a domain-specific language for simulations, but in its second edition, the creators realized this style could be useful beyond simulations and introduced the terminology of objects and classes. Before that, they used terms like systems, procedures, or processes.
Alan Kay was also inspired by the design of the ARPANET (the precursor to the Internet), seeing objects as tiny servers. His background in biology and mathematics influenced his thinking, using metaphors for how cells communicate and mathematical algebras for organizing systems. He described his approach as an “architecture for programming,” contrasting it with other schools of object-oriented programming that don’t emphasize architecture as much. In 1967, when someone asked him what he was doing, he said, “It’s object-oriented programming,” and the term stuck. However, he later clarified that many styles labeled as object-oriented don’t align with his original vision.
In 2003, Alan Kay provided further clarification: to him, object-oriented programming means only the following: messaging (we’ll discuss what that means), local retention and protection and hiding of state-process (likely referring to encapsulation), and extreme late binding of all things. He said this could be done in Smalltalk and Lisp.
I have to admit, hearing Alan Kay’s definition of object-oriented programming threw me for a loop. For one, I don’t really think of Lisp as an object-oriented language — it’s often called the first functional language. Additionally, Lisp explicitly added the Common Lisp Object System (CLOS) later, which was a separate thing. If Lisp was already object-oriented, why would it need to add an object system later? Also, Lisp doesn’t use terms like “objects” or “classes.” I’m not sure, but apparently, to Alan Kay, Lisp is one of the two languages that can do object-oriented programming.
He also said something even more confusing: “There are possibly other systems in which this is possible, but I’m not aware of them.” He said this in 2003, by which point Objective-C had been around for nearly two decades, and Ruby since 1995. Both of these languages support messaging, encapsulation, and extreme late binding, so I’m not sure what he meant. Regardless, let’s talk about what “messaging” means, because it’s not something I associated with object-oriented programming back in the 1990s.
In the object-oriented context, messaging refers to the idea that calling a method on an object means sending a message to that object. You send some piece of information to the object, and the object receives it, then decides what to do with it. The object decides in real time, at runtime, how to handle the message based on its current state and the message itself. This is similar to how an HTTP server works: when it receives a request, it can decide on the fly what to do with it, including returning a “not found” response, and it can change its behavior dynamically at runtime. This is exactly Alan Kay’s concept of objects. He often described each object as a tiny computer or server, and object-oriented programming, to him, was a recursive design where everything was like “computers all the way down.”
Ruby explicitly includes this idea of messaging. If you look at the official Ruby documentation, it describes calling a method as sending a message to an object so it can perform some work. For example, the syntax my_object.my_method sends the my_method message to my_object, and any arguments included are also part of the message. Yukihiro Matsumoto (Matz), the creator of Ruby, was directly influenced by Smalltalk and wanted to include this messaging concept in Ruby.
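Ruby’s messaging model can be made visible with its real core methods send and method_missing (the Greeter class here is a made-up example). The object, not the call site, decides at runtime how to respond:

```ruby
class Greeter
  def hello(name)
    "Hello, #{name}!"
  end

  # Called when this object receives a message it has no method for --
  # the object itself decides, at runtime, how to respond.
  def method_missing(message, *args)
    "Greeter can't handle the #{message} message"
  end
end

g = Greeter.new
puts g.hello("world")          # ordinary method-call syntax...
puts g.send(:hello, "world")   # ...is literally "send the :hello message"
puts g.goodbye                 # unknown message: method_missing decides
```

The last line is the HTTP-server analogy in miniature: an unrecognized request doesn’t crash the receiver, it just gets a “not found”-style response.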
Alan Kay also mentioned “extreme late binding in all things.” If you’re not familiar with this idea, it means that the list of methods an object supports and what they can do can change in any way at runtime. This is an extension of the messaging idea: when an object receives a message, it can decide on the fly whether it supports that method, what arguments it accepts, and what their types are. You can change these dynamically — for example, every 30 seconds, you could scramble the methods an object supports, delete half of them, and add new ones. This is a critical part of Alan Kay’s vision of object-oriented programming. If you’re not doing this — if you don’t support extreme late binding — then, to him, it’s not object-oriented programming.
In other words, when you see syntax like my_object.my_method, you should have no idea at compile time what that will do — or even if it will do anything. It might not be supported, or it might change dynamically at runtime. This is a key implication of Alan Kay’s vision of object-oriented programming, and it’s completely at odds with static type checking. Static type checking is where, at compile time, you have an exact list of all the methods that are supported. The compiler checks them and gives you errors (like red squiggles in your editor) if any of them aren’t supported. These methods shouldn’t change at runtime if you want to be able to type-check them. If you do want to change them at runtime, fine, but now you’re outside the world of static type checking.
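Here is a minimal sketch of that “extreme late binding” idea in Ruby, using the real reflection methods define_singleton_method, respond_to?, and remove_method (the obj object and its status method are made up for illustration):

```ruby
obj = Object.new

# Add a method to this one object, at runtime.
obj.define_singleton_method(:status) { "running" }
puts obj.status                  # the method exists right now
puts obj.respond_to?(:status)    # true

# Thirty seconds later the program could just as well delete it again:
obj.singleton_class.send(:remove_method, :status)
puts obj.respond_to?(:status)    # false -- no compiler could have known
```

No static type checker can promise anything about obj.status here, because the answer genuinely changes while the program runs.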
I mention this because there’s been a strong trend toward increased use of static type checking, which directly conflicts with the idea of extreme late binding. Smalltalk, influenced by and descended from Simula, was one of the first languages to embrace this idea. Smalltalk came out in 1972, the same year as C. Then, in 1985, Brad Cox created Objective-C. He started with C and wanted to add productivity features to it after reading about Smalltalk in Byte magazine. He decided to add Smalltalk-like features to C, calling the result Objective-C. He included message passing, late binding, and other concepts from Smalltalk. Even in modern Objective-C documentation from Apple, you’ll see references to message passing: “A message send sends a message with a simple return value to an instance of a class (AKA an object).” This idea of messaging persists today, though for some reason, Alan Kay doesn’t consider Objective-C to fit his definition of object-oriented programming, while Lisp does.
Python, despite being widely considered an object-oriented language, didn’t adopt these ideas. Guido van Rossum, the creator of Python, wasn’t strongly influenced by Smalltalk. Instead, he had a good experience with Simula and wanted to incorporate Simula’s ideas — like classes and objects — into Python. However, he didn’t include the messaging concept that was central to Alan Kay’s Smalltalk-inspired vision. This is one of the first examples of a language considered object-oriented that doesn’t fit Alan Kay’s definition at all. Python doesn’t do messaging, though it does support late binding.
Ruby, on the other hand, was designed to be more object-oriented than Python, specifically in the Smalltalk sense. Yukihiro Matsumoto (Matz), the creator of Ruby, was aware of Python and wanted to create a scripting language that included the idea of messaging, as we saw earlier. Ruby explicitly incorporates this concept, making it more aligned with Smalltalk’s vision.
Another language worth mentioning is Self. How many people have heard of Self? (A few hands go up.) Don’t worry — you’ll recognize the language that descended from it. Self was a programming language that descended from Smalltalk. The authors of Self wrote a paper titled Self: The Power of Simplicity. Unlike Smalltalk, Self didn’t include classes or variables (meaning members). Instead, Self adopted a prototype metaphor for object creation. How many people have heard of prototypal inheritance? (More hands go up.) How many people have heard of the most popular prototypal inheritance language, JavaScript? (Everyone raises their hand.) Exactly. Self’s legacy is that it inspired JavaScript’s original inheritance system.
I’m using JavaScript’s original 1995 logo here for reasons that’ll become apparent later. You might know JavaScript by its more modern logo, which looks like this. The concept of late binding and avoiding static type checking has been a notable trend. However, even before TypeScript emerged and the shift away from extreme late binding toward static type checking began, we had already started to see other significant changes in programming practices. For example, Self’s idea of prototypal inheritance influenced JavaScript, but ES6 introduced classes, which became the preferred, official way to handle inheritance and object-oriented programming in JavaScript. Prototypal inheritance is still supported, but when you read tutorials, people often focus on the newer, shinier ES6 class syntax. The reason they added classes is that the rest of the world, except for JavaScript, uses class-based inheritance. Prototypal inheritance, while cool at the time, doesn’t seem to have stuck around and might end with JavaScript.
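To keep this transcript’s examples in one language: Ruby can loosely approximate the prototype idea, because objects can carry their own singleton methods and clone (unlike dup) copies them. This is only an analogue of Self/JavaScript prototypes, not how JavaScript implements them, and the objects here are made up:

```ruby
# A prototype: a plain object with behavior, no class of its own.
proto = Object.new
proto.define_singleton_method(:speak) { "..." }

dog = proto.clone                              # "inherit" from the prototype
dog.define_singleton_method(:speak) { "woof" } # override per object

puts proto.speak   # => "..."
puts dog.speak     # => "woof"
```

The key contrast with class-based inheritance is that there is no class hierarchy at all: you make new objects by copying and tweaking existing ones.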
Dart is another example of this trend. As a direct offshoot of JavaScript, Dart skipped prototypal inheritance entirely and opted for class-based inheritance instead. This reflects a broader pattern in object-oriented programming: certain ideas, like prototypal inheritance, were experimented with, gained mainstream popularity, had their moment, and are now being phased out. While there are many possible reasons for this shift, it seems the concept simply didn’t endure as strongly as others.
We’ve also seen a move away from messaging. Ruby and Objective-C explicitly included messaging in their designs, but Objective-C’s successor, Swift, developed by Apple for iOS development and Cocoa, doesn’t include messaging except for backward compatibility with old Objective-C code. Similarly, Crystal, an offshoot of Ruby, is a very Ruby-like language that explicitly uses static type checking and static dispatch, moving away from extreme late binding.
It’s worth noting that Ruby and Objective-C were two of the languages with the most significant decreases in usage over the past six years in the Stack Overflow survey. This is likely due to Rails for Ruby and Swift for Objective-C, not necessarily because of problems with the languages themselves. However, the industry does seem to be moving away from messaging and late binding. Languages like Python, Ruby, Crystal, Swift, and TypeScript either have type checking baked in or have added official language extensions for type checking. This doesn’t bode well for Alan Kay’s vision of object-oriented programming, which he defined as messaging, encapsulation, and extreme late binding.
Encapsulation is still very much a thing, but messaging and extreme late binding don’t seem to be the future. In fact, while I wouldn’t say they’re dead, they seem well past their peak in terms of popularity.
Okay, so we’ve seen some differences in features here. The Alan Kay style of object-oriented programming emphasizes messaging and late binding. In contrast, procedural programming doesn’t really have a counterpoint to that — it’s more like, “We just don’t do messaging and late binding.” That’s what came before. When I say procedural programming is rising in popularity, it’s not because people are excited about a shiny new way of doing things. It’s more like, “Maybe we’ll just not do those things and go back to the old way.” If you wanted to frame it positively, you could say procedural programming is about plain old functions and plain old data being passed around between them.
Now, there’s another branch of object-oriented programming that’s probably what most people, including myself, think of when they hear “OOP.” This isn’t the vision of Alan Kay, who coined the term, but rather the work of Bjarne Stroustrup. Does anyone know what he’s most famous for? (Someone says C++.) Yes, that’s the language I’m about to talk about next. Stroustrup had used Simula in the past and also worked with C. He decided to combine the two in a language he called “C with Classes.” Has anyone heard of C with Classes? (A couple of hands go up). Okay, one and a half people. I didn’t have a logo for C with Classes because it was a short-lived language, but I made one up.
C with Classes was essentially what the name suggests: he took the C programming language and added classes to it. As a bonus, he also added stronger static type checking. This already puts it in direct opposition to Alan Kay’s vision of object-oriented programming, which emphasized dynamic behavior and late binding. Stroustrup described C with Classes as a “medium success.” It worked — it added objects and classes to C — but it was mostly used by his friends and didn’t gain widespread adoption. He saw this as a problem because he didn’t want to keep maintaining it if it was just going to be a small, medium-success project. So, he decided to take it further.
Well, I don’t want to shut it down because that would hurt all my friends, and I also don’t want to keep maintaining it on my own. I bet if I add a bunch of other features on top of the object-oriented stuff, maybe more people will find it useful, and then they can help me maintain it. So, he did just that and decided to rename it from “C with Classes” to “C++”. How many people here have heard of C++? (Everyone raises their hand). It’s like JavaScript again — cool. Yes, C++ became slightly more popular.
What’s interesting here is the subtle distinction: if you just took C, which was already a very popular language, and added classes to it (the OOP stuff), that wasn’t enough to make it popular. He had to add all the other non-OOP features before it gained traction. This tells us something: was it the OOP part that caused C++ to get popular? Obviously not, because when it was just “C with Classes”, nobody had heard of it. What made C++ popular was the other stuff he added on top. Before that, when it just had the OOP features, it wasn’t popular. Yet, in the 1990s, people thought, “Oh yeah, OOP is big — just look at C++.” These ideas got conflated, but the experiment showed that adding OOP to C didn’t make it popular. It was the same guy, too.
At any rate, C++ became quite a popular language. Alan Kay, however, was not a fan. In 1997, he said, “I made up the term object-oriented, and I can tell you that I did not have C++ in mind.” Fair enough. Be that as it may, it seems like this is the family of OOP that ended up taking over, whether or not the originator of the term was happy about it.
One of the most famous languages descended from C++ — and explicitly designed to appeal to C++ programmers — is Java. I’ve carefully organized this slide so you can see the Java logo next to another logo: JavaScript from 1995. Here’s a bit of backstory: JavaScript was originally supposed to be Scheme-like — a functional language and a Lisp dialect, no less. That’s what Brendan Eich was planning to develop at Netscape for use in the Netscape browser. Then Java came out, and Sun Microsystems launched a massive marketing campaign, spending hundreds of millions of dollars to hype Java. The hype was real. Netscape said, “Do you see this hype machine? Put ‘Java’ in the name, make the logo look like Java, and make the syntax Java-like. Just Java-ify what you’re doing.” Brendan Eich said, “Alright, I guess”. So, it became JavaScript. It was originally called LiveScript, and the rest is history.
PHP is another offshoot of C++. Rasmus Lerdorf, who is also Danish, was doing C++ programming and web programming and felt that C++ was too clunky for web development. He ended up inventing PHP. Later on, C# became Microsoft’s version of Java, with some differences, but this whole family of languages really comes from Bjarne Stroustrup’s version of C++. Stroustrup wasn’t interested in messaging or extreme late binding. In fact, he was quite into static typing, and as we’ve seen, this version of OOP ended up being the most widely used in industry.
Stroustrup gave a talk called The Design of C++ where he discussed his motivations, which date back to the 1980s. He highlighted some of his goals with C with Classes, which persisted through C++. He talked about program organization as his primary concern. Dennis Ritchie did a great job creating C, but it didn’t provide a clear way to organize programs. Stroustrup also wanted to maintain C’s runtime efficiency, availability, portability, and interoperability. He didn’t want to sacrifice those things, but program organization was his main focus. When I talk to people about what they like about OOP, this is one of the things that commonly comes up.
Alan Kay had a vision for an architecture of programming, while Stroustrup was more about program organization. What Stroustrup popularized — and what many people say they like about OOP — is the idea of combining actions on data types. You organize them in the same place: you have a class with pieces of information and methods that operate on that data. It’s a natural way to couple these two things together. Even in modern programming languages that aren’t object-oriented, like functional or procedural languages, you often see this organizational strategy using modules for encapsulation instead of classes.
Evan Czaplicki, the creator of the Elm programming language, used to say something I found really useful: “A good module is usually built around a particular data type.” The module will expose that type and include functions that work on it. This is more of a convention in the modules world, whereas in the classes world, it’s a very strong default and cultural norm. If you have modules and not classes, you can still organize your code this way, but it’s not as strongly encouraged as it is in C++ and other object-oriented languages.
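As a sketch of that “module built around a data type” convention in Ruby: the module exposes one central type plus functions that operate on it, using modules rather than classes for the organization. Interval and its functions are made-up examples, not from the talk:

```ruby
module Interval
  # The central data type this module is built around.
  Span = Struct.new(:lo, :hi)

  module_function  # everything below is callable as Interval.foo(...)

  def make(lo, hi)
    Span.new(lo, hi)
  end

  def contains?(span, x)
    span.lo <= x && x <= span.hi
  end

  def width(span)
    span.hi - span.lo
  end
end

r = Interval.make(2, 5)
puts Interval.contains?(r, 3)  # true
puts Interval.width(r)         # 3
```

This is the same data-plus-operations grouping a class gives you, but as a convention rather than a language-enforced default.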
Another thing that comes up a lot in OOP teaching is the “pillars of OOP”. This is something you’ll often see in beginner tutorials, but when I talk to people in the industry, they’re like, “Oh yeah, the pillars… I think I’ve heard of that”. Does anyone here know what the pillars are offhand? (Someone mentions encapsulation, inheritance, and polymorphism). Right, it’s not something people can just rattle off. The pillars are encapsulation, inheritance, and polymorphism. These are often emphasized in teaching OOP, but in practice, they don’t always come up as explicitly in industry discussions.
If you look up the pillars or principles of OOP, you’ll always see these four — or sometimes just three: abstraction, encapsulation, polymorphism, and inheritance. Let’s go through these a bit. By the way, these are more like stated values rather than unique benefits, as we’ll see. They’re not claiming to be exclusive to OOP, but rather they’re values you should prioritize if you’re doing OOP.
Abstraction is the idea of not depending on implementation details [The problem here: almost all abstractions are leaky]. Instead, you depend on an abstract idea of something rather than the specific details, like how bits and bytes are arranged in memory. Encapsulation is about preventing dependence on implementation details by splitting things into public and private. You’re only allowed to depend on what’s publicly exposed, not the private details. Polymorphism is about determining the implementation of something abstract based on its type. Practically every modern language has these concepts, even if they call them different things. For example, Go recently added generics (parametric polymorphism), and Go doesn’t even have classes, but you don’t need classes to have polymorphism.
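A small illustration that polymorphism doesn’t require classes arranged in a hierarchy: in Ruby, any two unrelated types that respond to the same message can be used interchangeably. The shapes and total_area here are made-up examples:

```ruby
class Circle
  def initialize(r)
    @r = r
  end

  def area
    3.14159 * @r * @r
  end
end

class Square
  def initialize(side)
    @side = side
  end

  def area
    @side * @side
  end
end

# Depends only on the abstract capability "responds to area" --
# which implementation runs is decided by each value's type.
def total_area(shapes)
  shapes.sum(&:area)
end

puts total_area([Circle.new(1), Square.new(2)])
```

Circle and Square share no superclass; the caller is polymorphic over anything with an area method.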
This leaves us with inheritance, which is pretty uniquely OOP. You don’t find inheritance in functional languages unless they’re hybrid OOP/FP, and you don’t necessarily find it in procedural languages. There’s interface inheritance and implementation inheritance, but the one that’s really OOP-centric is implementation inheritance [As far as I know, it is quite the opposite: interface inheritance is really OOP-centric]. When I talk about inheritance here, I’m talking about implementation inheritance, which is essentially hierarchical code sharing. I’d contrast that with composition, which is non-hierarchical code sharing. You’ve probably heard the recommendation in the OOP world to “prefer composition over inheritance”, which is something that basically every language can do.
Let’s get a little more specific about what that means.
We can see why people might prefer composition over implementation inheritance. Martin Snyder gave me a concise example of why this rule exists. Let’s say you have a class with three methods that call each other. If you create a subclass and override one of those methods, the other two methods might still call the overridden method. The problem is, if you didn’t realize those other two methods were calling the one you overrode, you might have accidentally broken them. You didn’t even know they were doing that because it’s not explicitly part of the language’s semantics — it’s just something that can happen. This is a potential source of bugs, where you unintentionally cause issues by overriding superclass behavior.
This is a downside of implementation inheritance. In contrast, the composition approach avoids this problem. Instead of subclassing to override a method and add new functionality, you can create a new class that has the original class as a member. This way, you still have access to all the original behavior, but you’re not changing or overriding anything. All the methods of the original class remain intact, and there’s no chance of causing the bug described earlier. However, once you’re doing this, it’s essentially just nested structs in C. Every programming language can do this — functional, procedural, etc. At this point, we’re not really talking about a strength of OOP. In fact, you could argue that implementation inheritance is a downside of OOP, as it introduces potential pitfalls, and the recommended approach is to use composition, which is less error-prone.
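The fragile-base-class bug and its composition fix can be sketched concretely. The class names here are made up; the structure follows the description above — render internally calls header, and a subclass that overrides header unknowingly breaks render:

```ruby
class Report
  def render
    "#{header} | body"   # silently depends on header
  end

  def header
    "REPORT"
  end
end

# Implementation inheritance: the override looks harmless in isolation,
# but render now calls the overridden method and breaks.
class LoudReport < Report
  def header
    raise "headers are computed elsewhere now"
  end
end

begin
  LoudReport.new.render
rescue RuntimeError => e
  puts "override broke render: #{e.message}"
end

# Composition: has-a Report instead of is-a Report. Nothing inside
# Report is overridden, so its methods can't be accidentally broken.
class LoudReportView
  def initialize(report)
    @report = report
  end

  def render
    "LOUD: " + @report.render
  end
end

puts LoudReportView.new(Report.new).render  # => "LOUD: REPORT | body"
```

As the talk notes, the composed version is really just a nested record with functions, which any procedural or functional language can express equally well.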
Going back to the essence of the paradigms:
- Procedural programming is about using procedures instead of go-tos.
- Functional programming is like procedural but with the added caveat of avoiding mutation and side effects.
- Object-oriented programming can be broken into two categories:
- For Alan Kay, it’s about messaging and extreme late binding of all things.
- For Bjarne Stroustrup, it’s about hierarchical code sharing through implementation inheritance, which is increasingly disfavored in favor of composition.
Okay, so that concludes the longest section of the talk — differences in features. Now, let’s talk about the differences in style between object-oriented and procedural programming. It’s pretty hard to find someone explicitly saying, “Here’s how to do procedural programming style”. It doesn’t have the same level of hype as object-oriented programming, even all these years later. The best resource I’ve found is a YouTube video with 2 million views, which, while not an entire bookshelf at Borders, is still a lot of views for a programming talk.
In this talk, the speaker discusses the essence of procedural style, which has some things in common with functional programming. He emphasizes that procedural programming is about letting data just be data and actions just be actions. That’s the core idea — not organizing things hierarchically into classes and subclasses, but simply having data and procedures that operate on that data. If you want encapsulation, you can use modules. He goes through four examples in the talk, taking object-oriented code and rewriting it in a procedural style without changing the language.
One of the examples he uses is from Sandi Metz, who is an awesome human being and a complete authority on object-oriented programming — a true legend in the community. The code she refactors in her talk is a great example of good object-oriented code. In her example, she refactors code that initially has three classes (FTP Downloader, Patent Job, and Config) and nine methods. The refactored version has no classes, five procedures, and all the procedures are named after verbs. This ties into a blog post by Steve Yegge called Execution in the Kingdom of Nouns, which the speaker references. The post is mostly about Java, and Steve writes:
“Classes are really the only modeling tool Java provides you. So whenever a new idea occurs to you, you have to sculpt it, wrap it, or smash it until it becomes a thing — even if it began life as an action, a process, or any other non-thing concept.”
In other words, you might look at the object-oriented code and ask, “Why do we need a thing called FTP Downloader? Couldn’t we just write a procedure called download_over_ftp? Why do we need a Patent Job? Why not just have a function called process_patent?” Steve Yegge goes on to say:
“I’ve really come around to what Perl folks were telling me eight or nine years ago: ‘Dude, not everything is an object.’”
There’s definitely something to that idea. I remember back in the 1990s, when I first heard about Java after working with C++, thinking, “Oh, everything’s an object — that’s really nice and consistent.” But over time, it’s become clear that not everything needs to be an object, and procedural programming offers a simpler, more straightforward approach in many cases.
The “everything is a nail when you have a hammer” mindset can be limiting. In the object-oriented version, you have a class called FTPDownloader, but in the procedural rewrite, the speaker just says, “We’re not going to have an FTPDownloader class — we’re just going to have a procedure called ftp_download_file that downloads the file over FTP.” Instead of a PatentJob class, he has two procedures: process_patent and parse_patent. For Config, he doesn’t make a separate class because, in Ruby, you don’t have to — it’s just a piece of data. The point is, this approach is much more verb-oriented than noun-oriented, and in this case, it was a better fit for what they were doing.
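A hypothetical sketch of that verb-oriented shape, in Python rather than the talk's Ruby. The procedure names come from the talk; the bodies are invented here, with the FTP call stubbed out so the sketch stays self-contained:

```python
def ftp_download_file(config: dict, path: str) -> bytes:
    # A real implementation would open an FTP connection using
    # config["host"] and so on; stubbed here for illustration.
    return b"<patent>...</patent>"

def parse_patent(raw: bytes) -> dict:
    return {"raw": raw.decode(), "parsed": True}

def process_patent(config: dict, path: str) -> dict:
    # Config is just a plain piece of data -- no Config class needed.
    raw = ftp_download_file(config, path)
    return parse_patent(raw)
```

Three nouns and nine methods become a handful of verbs passing plain data along.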
The idea of hierarchical classification — classes, subclasses, and implementation inheritance — isn’t unique to OOP. Hierarchical classification appears all over the place, even in math. For example, in Lisp, there’s a numeric hierarchy. Some might argue that’s object-oriented, but most would say it’s functional or procedural. Similarly, in Haskell, you see hierarchies like monads, applicatives, foldables, and functors. Hierarchies definitely appear in programming languages, but they’re much more prominent in OOP than in procedural or functional programming.
How many people have gone through an object-oriented programming tutorial where you saw something like this: Dog inherits from Animal, Car inherits from Vehicle, and Bicycle does too? (Many hands go up). This is a really common example in OOP tutorials. You don’t see this kind of hierarchy in procedural or functional programming tutorials. The idea of hierarchy is essential to the Bjarne Stroustrup school of C++-based OOP, but it’s not a core part of procedural or functional styles.
For example, Ruby’s standard library has a complex class hierarchy. In Ruby’s exception hierarchy, FloatDomainError is a subclass of RangeError, which is a subclass of StandardError, which is a subclass of Exception, which is a subclass of Object. This kind of deep hierarchy is common in OOP but not in procedural or functional programming.
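Python's standard library shows an analogous deep hierarchy (this is a Python parallel, not the Ruby chain from the talk): FloatingPointError subclasses ArithmeticError, which subclasses Exception, which subclasses BaseException. You can see the whole chain from the method resolution order:

```python
# Walk the inheritance chain of a built-in exception class.
hierarchy = [cls.__name__ for cls in FloatingPointError.__mro__]
# hierarchy is ["FloatingPointError", "ArithmeticError", "Exception",
#               "BaseException", "object"] -- four levels of subclassing.
```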
That’s one way to organize errors — you can certainly say, “This is a subclass of that, which is a subclass of that” — but it’s not the only way to do it. In Java, for example, the standard library uses a mix of interface inheritance and implementation inheritance. If we zoom in on TreeSet at the bottom of the hierarchy, it inherits from SortedSet, which inherits from Set, which inherits from Collection, which inherits from Iterable. In Rust, which isn’t object-oriented, you have something similar but without the hierarchy. Rust uses traits, and instead of a deep inheritance tree, you have a flat list of traits. For example, a BTreeSet in Rust implements multiple traits, but there’s no hierarchical structure — it’s just not the focus in Rust.
All of these examples are about abstract types. In Java, an interface like Shape is an abstract type. It doesn’t tell you how the shape is stored in memory — it just says, “If you give me a shape, I can ask it for its area, and it will return an integer [actually, it should be double]”. The same idea exists in Go, even though Go isn’t object-oriented. You can define an interface like Shape that says, “If I ask for the area, it gives me an integer [actually, it should be double]”. Rust does the same thing with traits, and Haskell does it with type classes. The syntax and names differ, but the core idea is the same: abstract types let you define behavior without specifying implementation.
Concrete types, on the other hand, are specific implementations. For example, a Rectangle is a concrete type with a specific implementation of area (width times height), and a Triangle has its own implementation. The key idea is that if you have a Shape, all you know is that you can ask for its area and get back an integer [actually, it should be double. One more note: the correct way to model this is not with polymorphism but with a sum type. See, for example, sealed classes and case classes in Scala; sealed classes and interfaces in Kotlin; and JEP 409: Sealed Classes from Java 17 onward].
In statically typed languages, this is straightforward, but dynamically typed languages like Python handle it differently. In Python, you might define a Shape class with a method area that raises a NotImplementedError by default. Then, if you create a concrete class like Rectangle, you override area to return the actual area (width times height). This is the conventional style in Python, though I’m not a professional Python programmer — most of my Python experience is helping others with their homework.
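A minimal sketch of the conventional Python style just described (the class bodies are filled in here for illustration):

```python
class Shape:
    def area(self):
        # Abstract by convention: concrete subclasses must override this.
        raise NotImplementedError("subclasses must implement area")

class Rectangle(Shape):
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height
```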
You can do something similar in JavaScript using prototypal inheritance. For example, you might define a Shape function and add an area method to its prototype that throws an exception by default. Then, for a Rectangle, you override the area method to return width times height. You can also achieve the same abstraction in JavaScript without using objects at all. For example, you could use anonymous objects and functions to achieve the same result.
I’m not defining any classes or using prototypes here. Instead, area is a field on the record (or object) that’s an anonymous function. This function simply returns rec.width * rec.height. What is rec here? It’s just a variable from the outer scope. This is an example of a closure in JavaScript: the function captures variables defined outside of it, even ones defined partway through the surrounding code. This lets me write a JavaScript implementation where I can call answer = rec.area() just like before, but without using classes or prototypes at all.
This is an example of achieving the same data abstraction using a different style. I could do the same thing for a triangle: instead of rec.area, I’d have tri.area, and the calculation would be base * height / 2 instead of width * height. That’s totally fine. I can then generalize this and use the abstraction to write a function that takes in a “shape.” Notice that I haven’t defined what a shape is — there’s no Shape class, no interface, no prototype. I just wrote functions for rectangles and triangles, and now I can call shape.area() without knowing whether it’s a rectangle or a triangle. Both will work because I’ve abstracted over the concept of a shape, and I did it without using any object-oriented features. This is just another style you can use.
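The closure-based approach described above (the talk's examples are in JavaScript) translates directly to Python. This is an analogue, not the talk's actual code; the constructor names are invented:

```python
# No classes, no prototypes: a "shape" is just a dict whose "area" field
# is a function closing over the constructor's local variables.
def make_rectangle(width, height):
    return {"area": lambda: width * height}

def make_triangle(base, height):
    return {"area": lambda: base * height / 2}

def describe(shape):
    # We never defined what a "shape" is. Anything carrying a callable
    # "area" field works -- the abstraction holds with no OOP features.
    return shape["area"]()
```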
I could even take this a step further. Suppose I don’t have the language feature of closures, like in C, where a function can’t capture local variables from an enclosing scope. In that case, I can make it a little less convenient but still achieve the same abstraction. For example, I could pass the necessary data explicitly as arguments to the function. This shows that you don’t need object-oriented features to achieve data abstraction — it’s just a matter of style and the tools available in the language.
I can achieve the same abstraction by taking an explicit argument instead of relying on closures. For example, I can define a function that takes shape as an argument and calculates base * height / 2 or width * height depending on the shape. When I call it, I have to pass shape explicitly, like shape.area(shape), which might look a bit weird, but it works. I’ve still achieved the same level of abstraction — I don’t know anything about the shape other than it has an area method, and I can call that to get the correct result based on whether it’s a rectangle or a triangle. The fundamental abstraction still works; it’s just a question of how convenient or inconvenient the implementation is.
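A Python analogue of that explicit-argument variant (again a translation of the talk's JavaScript; the field names are assumptions):

```python
# No closures needed: each "area" function receives the shape's data
# explicitly, the way a C function pointer would.
def rect_area(shape):
    return shape["width"] * shape["height"]

def tri_area(shape):
    return shape["base"] * shape["height"] / 2

rect = {"width": 3, "height": 4, "area": rect_area}
tri = {"base": 3, "height": 4, "area": tri_area}

# Same abstraction, slightly clunkier call site: the shape is passed to
# its own "method", as in the talk's shape.area(shape).
answer = rect["area"](rect)  # 12
```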
This highlights an important theme: programming paradigms aren’t about enabling things that were impossible before. They’re about how well certain styles are supported and how ergonomic they are. We’ve seen many ways to handle abstract data in functions or methods: interfaces, traits, type classes, closures, non-closure functions, and even function pointers in C. All of these achieve the idea of abstract types and passing abstract data between functions, but they come with different trade-offs in terms of ergonomics and style.
Putting it all together, language paradigms aren’t about what styles of programming are possible — Turing-complete languages mean anything is possible. It’s more about how well those styles are supported and what styles the ecosystem embraces. For example, if you’re calling shape.area(shape) in JavaScript, you’re probably going against the grain of the ecosystem, where most people use classes or prototypes. Similarly, if you’re trying to do something like overloading in C, you’re likely to find limited support unless you’re working on something very specific.
So, regardless of the features present in a language, you can choose to write in a different style, but the real question is: how well-supported is that style? How good are the ergonomics? And what’s the ecosystem like around it?
To wrap up, the procedural style I’m describing is about programming with less hierarchy. It’s about organizing code into plain old data and procedures, using modules for modularity instead of classes for encapsulation. Modules give you the same public/private separation as classes but without the hierarchical structure.
This brings us to the shortest part of the talk: what changed? Looking back at my time walking around Borders, seeing all those books about object-oriented programming, I remember the promises: OOP was going to make things so much better, reduce complexity, and make programs less brittle. But after spending decades in the industry, I’ve come to realize that while OOP has its strengths, it’s not the only way to solve problems. Procedural programming, with its focus on simplicity and less hierarchy, is making a comeback because it offers a different set of trade-offs that are sometimes better suited to modern programming challenges.
I feel a sense of disillusionment. Back then, object-oriented programming was really exciting, but now the shine has worn off. It doesn’t feel like this groundbreaking solution that’s going to solve all our problems anymore. A term I’ve heard a lot when discussing this talk is “broken promises” or “not living up to the hype.” For example, take a book like Clean Code. Who doesn’t want clean code? Nobody wants dirty code — we all want clean code. But then you think about things like UML diagrams. Was that really an improvement over procedural programming? Yikes, maybe not. Or abstract Singleton proxy factories — was that a step forward? I don’t know.
Now that we’re well past the honeymoon phase of OOP, the excitement has faded, and we’ve started to see some of the excesses or areas where things went too far. It’s natural to look back and ask, “Was this the right direction in the first place?” There are plenty of things about OOP that are nice, and there are good reasons why OOP languages are still widely used. It’s not like everyone is saying, “Throw it all out.” But there’s also a growing sense that maybe it wasn’t worth it. People are starting to revisit earlier ideas, especially now that we have modern conveniences like modules, which are mainstream today but weren’t back when OOP first emerged.
A lot of people are looking at code and saying, “I could put this in three classes named after nouns with nine methods, or I could just have five procedures named after verbs. Am I really missing out on a lot? Is this code going to be brittle and hard to maintain?” At a baseline, the procedural approach seems simpler — there’s less code, it’s easier to follow, and it doesn’t feel like it’s doing too much. Sure, OOP thought leaders say, “As your codebase grows, you’ll want this structure,” but many of us have worked in large OOP codebases and thought, “I still had those problems — I just had all this extra stuff on top of them.”
So, while OOP has its strengths, there’s a growing recognition that it’s not the only way — or even the best way — to solve every problem.
So, disillusionment is a common theme when I talk to people about this topic. Especially when you consider that if you want to do things like abstraction, polymorphism, and encapsulation, there are many ways to achieve that. You don’t necessarily need classes, subclasses, and interfaces. For example, you can get encapsulation from modules, and we’ve seen lots of ways to handle abstract data without relying on OOP constructs. Composition over inheritance makes sense, and if that’s what I’m going to be doing anyway, do I even need inheritance? Inheritance feels like something I’m supposed to avoid, but in many cases, the libraries I’m using rely on it, so I can’t completely escape it. Is that a pro?
Are we still doing messaging and extreme late binding? It’s cool that that was Alan Kay’s vision, but it’s not really what people are doing in the OOP world today. When you put all these things together, it starts to feel like we’re not at the end of the OOP era yet, but maybe we’re past its peak. There’s a reason procedural programming is coming back: people are saying, “You know, things were a bit simpler and better in the old days.” This is a trend, and it’s why we’re seeing new programming languages explicitly saying, “We’re not going to do OOP. We’re going to stick with procedural programming.” They’re starting with C and trying to fix its problems, rather than going in the C++ direction.
You can see this trend not just in low-level systems languages but in other languages as well. A lot of them are moving in a functional programming direction, but the idea is the same: let’s do less hierarchy, let’s just have plain old functions and data, and we think that will make our lives better. It’s understandable.
So, what changed? Essentially, many of the selling points of object-oriented programming — like abstraction, polymorphism, and encapsulation — are now commonplace in other paradigms. Meanwhile, many of OOP’s signature features, like inheritance, message passing, and late binding, have fallen out of favor. People are moving toward static type checking, avoiding messaging, and preferring composition over inheritance. When you put it all together, if you’re going to stay in the imperative world, procedural programming is looking more appealing. It’s simpler, avoids some of the pitfalls of OOP, and aligns better with modern programming trends.
Procedural programming today offers many of the common selling points of OOP without the disfavored aspects like inheritance and messaging. If you’re on board with the idea of letting data just be data and actions just be actions, you can understand why we’re starting to see a return to procedural programming. Thanks very much!
Audience Question:
Hi, awesome talk! Thanks so much. Just a quick question: there’s a somewhat cynical view that class-based programming languages, like Java, force you into one style, whereas procedural and functional paradigms give you more freedom. This can lead to a lack of a unified style unless you enforce patterns. Could you speak to that in the context of this move to a new paradigm?
Answer:
The question is about whether languages like Java force you into a hierarchical, noun-based programming style, while procedural and functional paradigms offer more freedom. I don’t know if that’s necessarily cynical — it could be seen as either a good or bad thing. On one hand, a lot of people I’ve talked to say they liked OOP because it gave them a clear way to organize their programs around classes. You lose that strong push to collocate data with the operations that work on it when you move to procedural programming with modules.
It’s accurate that Java strongly pushes you in that hierarchical direction. I remember thinking it was cool in Java that everything is an object — except for primitives, which really bothered me. When I heard Ruby didn’t have that issue, I thought, “Cool.” But at the same time, it felt weird that even the program entry point, public static void main, had to be in a class. Why am I making a noun out of where the program starts? It still feels weird. So, there are pros and cons to that. I don’t think it’s all upside or all downside — it’s a trade-off.
Audience Comment:
It’s so weird to me that sparkling water comes in a can. I feel like, “Oh, time to hit the sauce after that talk.”
Audience Comment:
Speaking for myself, OOP was all about tying together data with the methods that work on it. I think that idea has persisted. If you look at Rust, you have a struct and an impl block. In Go, you can define functions with a receiver, which makes it look like you’re working with objects.
Response:
Totally, yeah. The idea of tying data and behavior together has persisted, even in languages that aren’t strictly OOP. Rust’s structs and impl blocks, or Go’s methods with receivers, show that the concept of organizing data and operations together is still valuable, even if it’s not done through traditional OOP constructs. It’s more about how the language supports and encourages certain styles, rather than being strictly tied to a paradigm.
The idea of coupling data with the operations that work on it, especially within a namespace and with public/private access control, has turned out to be a good idea. The interesting question, though, is whether you have to do it that way, or if it can be a convention that’s opt-in. For example, you could have modules that don’t strictly couple data and operations, or modules that operate on multiple data types at once.
Another question is: if you do organize your code that way, what does it imply? You mentioned Rust’s traits and impl blocks — even if you’re not using traits, you can still use impl to get method-like syntax. I do this all the time in Rust. But the implications of doing that in Rust are mostly about syntax and convenience. It looks like a method, and you don’t have to repeat the type in as many places — you can just use self, which refers to the type of the original thing. What you’re not getting, though, is inheritance. You’re not opting into subclassing or overriding behavior. That whole aspect is gone.
This seems to be a key difference between object-oriented languages and procedural/functional languages. In OOP, coupling data to the functions that operate on it often involves subclassing and overriding, whereas in procedural and functional languages, it’s more about organizing code without that hierarchical structure. But the fundamental idea — that organizing data and operations together is a good way to structure code — is pretty uncontroversial at this point. And credit to OOP for popularizing that idea.
Thanks very much!