The most “popular” programming languages since 1960


There will always be people who dismiss any list of “popular” programming languages as a meaningless horse race. That holds only as long as you are not looking at job prospects, at what will pay the bills and keep the lights on when you enter the working world. But the field is large enough that you have room to ask yourself: what kind of programming do you want to do? Systems programming? Applications? Servers? Clients? Scientific models? Statistical studies? Device drivers? Everyone hears about web programming, since it is the most visible and seems to get the most “airplay” in the media. It might even interest you. For others, it’s dull. There is so much more out there.

With that preamble, why do I bother to do this at all? It is to show how popular languages follow the ebb and flow of computing history. Since World War II we have had ENIAC, a host of IBM and AT&T mainframes, then networked computers, then personal computers, then the internet, and so on. With each major shake-up, programming needs change.

Disk packs on an IBM 2314.

By 1965, the things that changed preferences in computer languages were the same things that change them today: changes in hardware, and programming for mainframes versus “personal” computers (which in that decade amounted to machines like the PDP-1). In the 1960s, disk and drum storage were relatively new, as was magnetic tape. Transistors had not yet reached their heyday, and some of the most powerful computers still ran on vacuum tubes.

1960

COBOL. 1960 saw large computers enter the service of business, and by far the most popular language for them was COBOL (COmmon Business-Oriented Language). COBOL was designed to be machine-independent, which meant a program was capable of running on many different machines with a minimum number of changes. Even at the end of 2022, it was still widely claimed that over 80% of business transaction code runs on COBOL.

1965

The Olivetti programmable calculator, about the size of a small modern digital cash register, and among the first of its kind.

ALGOL. ALGOL 60 saw the first published implementation of the Quicksort algorithm, invented by C. A. R. Hoare while he was a visiting student in Moscow. He was later knighted by Queen Elizabeth II for his services to computing. ALGOL trailed COBOL in popularity, and both were dwarfed by the number of FORTRAN users.
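
For readers who have never seen it, Hoare’s idea is short enough to sketch. Here is a minimal paraphrase in modern Python; the original was written in ALGOL 60 and partitioned the array in place, so treat this list-copying version purely as an illustration of the divide-and-conquer idea.

    # Quicksort, paraphrased in Python: pick a pivot, split the input into
    # smaller/equal/larger pieces, and recursively sort the two outer pieces.
    def quicksort(items):
        if len(items) <= 1:
            return items                      # nothing left to sort
        pivot = items[len(items) // 2]        # choose a pivot element
        smaller = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        larger = [x for x in items if x > pivot]
        return quicksort(smaller) + equal + quicksort(larger)

    print(quicksort([5, 3, 8, 1, 9, 2]))      # [1, 2, 3, 5, 8, 9]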

Niklaus Wirth

FORTRAN. FORTRAN was far and away the most popular programming language by 1965, and it stayed that way for decades. It was taught in the “service” computing courses taken by science students and most engineering students, and it was known for its elaborate mathematical capabilities.

Other languages popular during that period: assembly, APL, BASIC and Lisp. The end of the decade also saw Niklaus Wirth introduce Pascal.

1970

1970 saw the invention of UNIX by Ken Thompson and Dennis Ritchie at Bell Labs, and Pascal came on board as a teaching language for structured programming in many university freshman courses. Otherwise, the landscape of popular programming languages was much the same as before.

1975

By 1975, C had grown in popularity, but it was not a teaching language: BASIC, Pascal, and Lisp had all ascended as we sent men to the moon and more students became interested in computer programming. FORTRAN and COBOL were still at the top of the heap, while ALGOL, APL and assembly moved down. Assembly would fade from general popularity in the decades that followed, but it would never truly go away.

1980

Enquire was a hypertext system first proposed at the CERN physics laboratory by Tim Berners-Lee in 1980. Ideas from Enquire would later be used in the design of the World Wide Web.

Around 1980, Bjarne Stroustrup was developing “C with Classes”, the precursor of C++, bringing the concept of object-oriented programming to the C world. More and more people had mastered C, and it moved to the middle of the “top 10” programming languages in use. Pascal became wildly more popular with the arrival of household desktop PCs and the release (in 1983) of the Turbo Pascal compiler by a software company called Borland. Microsoft offered BASIC and FORTRAN compilers that went well beyond the stock BASIC interpreter bundled with DOS (GW-BASIC, and later QBasic). In addition, Tandy, Commodore and Sinclair were offering their own machines, each with its own BASIC interpreter.

1985

While he never claimed to have invented the Internet (according to Snopes.com), Al Gore tabled bills and sourced funding that greatly expanded the internet after 1989.

Bjarne Stroustrup published his seminal work The C++ Programming Language in 1985. With the introduction of Windows (and later Windows NT), Microsoft expanded its programming offerings with C and C++ compilers, eventually bundled as Visual Studio. C was rising to the top of the charts, competing with Borland’s Pascal products, and it would not leave the top 3 for another 15 years.

1990

MS Windows 3.0 first shipped in 1990, the same year Adobe shipped Photoshop and the World Wide Web got its first exposure. By 1991, a computer science student named Linus Torvalds had uploaded his first kernel source code to an FTP site, where an administrator renamed the directory “Linux”, a name which stuck.

Visual Basic was introduced by Microsoft, and C++ rose to the top 5. FORTRAN, BASIC, assembly, and COBOL all fell to the bottom half of the top 10. C had a wild surge in popularity as the Internet came onstream and the World Wide Web got its start in the universities. By 1992, the top 2 positions were occupied by C and C++. Also by 1992, the fledgling web needed CGI scripting, and Perl became popular for the job.

1995

By 1995, Netscape Navigator had been out for about a year. 1995 was also the year Microsoft first introduced Internet Explorer and gave it away for free, a move that eventually drove Netscape to open-source its browser, from which Mozilla and later Firefox emerged.

There were many scripting languages at the time aimed at web browsers, but no default standard had been set. By the end of the decade, that standard would go to JavaScript, a language first developed in 1995. It and Perl were rising in popularity as the client-side and server-side web languages respectively. But the following 5-year period brought another shake-up. Java (a very different language from JavaScript), a product of Sun Microsystems, came out of nowhere in 1995 to become the 3rd most popular language by 1996. By this time, the web had arrived in people’s homes, and there was a need to enhance people’s internet experiences.

Pascal was falling out of favour as computers moved away from DOS in the home and in business, and by 1995 Borland had designed an object-oriented version of Pascal, called Delphi. It turned out to be a formidable competitor to Visual Basic. By 1998, even more server-side dynamic web programming was being done in PHP.

2000

2000 was the year USB flash drives grew in popularity. In other news, Google made its IPO in 2004, the same year we first heard about “Web 2.0”.

PHP overtook Perl by 2000 to become the 5th most-used language that year. Java and JavaScript occupied 2nd and 3rd, pushing C++ to the #4 spot; C was still on top. That year, Microsoft offered the world C#. Apart from C and C++, the top 5 languages were all web languages: Java, JavaScript and PHP. Perl was descending in popularity as a new scripting language with much cleaner syntax became ascendant: Python.

2005

In 2005, IBM sold its PC division to Lenovo, a Chinese firm that would go on to become the largest manufacturer of PCs in the world.

C was finally pushed out of the top spot by Java, and Delphi started to drop out of the picture as Borland ran into financial trouble after Kylix, its failed bid to make inroads into Linux. Borland sold Delphi to Embarcadero, which produces the product today. Perl declined only slowly, buoyed by its legacy of libraries and its role in bioinformatics projects, such as the Human Genome Project, conducted by universities around the world.

In part due to bioinformatics and other informatics endeavours, math- and statistics-oriented languages such as MATLAB and R gained prominence. Newer web-friendly languages such as Ruby were also appearing.

2010

At more than 1 petaflop (over 1 quadrillion calculations per second), the Tianhe-1 (released in 2010) was capable of running massive simulations and complex molecular studies. The following year, IBM’s Watson won a Jeopardy! tournament.

Perl finally dropped out of the top 10, leaving a legacy of code on web servers all over the world. Objective-C became popular with Apple developers, driven by operating systems like NeXTSTEP, iOS and OS X. By 2011, the top 4 were Java, JavaScript, Python, and PHP. Apple’s Swift, pitched in part as a teaching language, stood at #9 in 2014.

2015

C and C++ were pushed out of the top 5. R, primarily a statistical programming language, rose to #7, just behind C. By 2019, Python was the top language in use. Kotlin also made a brief showing in 2019, owing to Google’s support for the language on Android.

2020

Not much changed, except for the rise of Go, touted as a more “reasonable” alternative to C++ with a lighter syntax. Microsoft introduced TypeScript, a superset of JavaScript and likely an attempt to “embrace and extend” it, much as the company had once tried with Java (J++ never caught on) and with JavaScript itself through VBScript, which never quite caught on over the long haul either.

While that was happening, Rust, which had been around for some time, enjoyed some popularity as a back-end web language as well as a systems language. By the end of 2022, TypeScript had risen to the top 5. Of the 11 most popular languages, 7 are heavily web-oriented: Python, JavaScript, TypeScript, PHP, Go, Rust, and Kotlin. The others are Java, C++, C, and C#.

Sunshine List 2021


The Ontario government has released the Sunshine List, a publicly available list of the names, positions, and salaries of any government employee earning over $100K per year. It was started in 1996 by the Mike Harris government as a way of naming and shaming those who commit the sin of earning above six figures. The article that appeared in today’s Toronto Star had a picture of an elementary school teacher and a classroom of young children just below the headline, to suggest the targets of this list.

However, the list covers all 240,000 or so full-time government employees who get a paycheque from Queen’s Park, regardless of the sector of government involved: Public Works, Healthcare, the ministries, OPG and the LCBO. And that just scratches the surface.

The 26 top wage earners working for school boards are those earning more than $250K. All of them are school board directors, with the occasional associate director. Compared with the other sectors of government, the education sector is still the lowest-paid, as it always has been. So it is no surprise that the sector called “School Boards” has, according to the Sunshine List, the lowest average salary among those earning above $100K.

The reality of such perceived largesse is twofold. First, the list, which started in 1996, has become less impressive in its impact than it was back then: $100K today has the same buying power as a salary of $69,769.70 in 1996.

There is also taxation, which eats up some $35,000 of your $100K gross earnings; the money you earn is not what you take home. And in 1996 dollars, that take-home pay of $65K buys what about $45K did back then. You can still live more or less comfortably and relatively debt-free on that salary, but it is far from lavish, especially in the Greater Toronto Area, where you won’t be able to afford a house or even a condo. An earner taking in $70,000 in 1996 could buy a home in the GTA. Nowadays, an employee in the GTA earning $100,000 is lucky to find a two-bedroom apartment that doesn’t break the bank, especially while raising a family.

Because of this, the magic number of $100,000 is outdated and much less meaningful than it used to be. It was a lot of money in 1996, but nowadays it is barely above a living salary for a family of four; it only looks big because of all the zeroes after the 1. To match the buying power of $100,000 in 1996, you would need to earn about $160,000 today.
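
As a rough illustration of the arithmetic, here is a small Python sketch. The conversion factor is the one implied by the example above ($100,000 today versus $69,769.70 in 1996); an official CPI table may give a somewhat larger factor, which is how the $160,000 figure arises.

    # Back-of-the-envelope salary conversion, using the ratio implied by the
    # figures quoted above ($100,000 today ~ $69,769.70 in 1996).  This is an
    # illustration, not an official CPI calculation.
    TODAY_PER_1996_DOLLAR = 100_000 / 69_769.70   # about 1.433

    def in_1996_dollars(salary_today):
        """Approximate 1996 buying power of a present-day salary."""
        return salary_today / TODAY_PER_1996_DOLLAR

    def today_equivalent(salary_1996):
        """Present-day salary with the buying power of a 1996 salary."""
        return salary_1996 * TODAY_PER_1996_DOLLAR

    print(f"${in_1996_dollars(100_000):,.2f}")    # $69,769.70
    print(f"${today_equivalent(100_000):,.2f}")   # about $143,330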

The other aspect of this is that 85% of earners on the Sunshine List earn between $100,000 and $110,000, and 70% of them earn less than $105,000. That means the share of earners between $105K and $110K is barely 15% of the distribution, and as you go up in salary, the number of earners in each successive bracket falls like a rock. Keep in mind, too, that the list doesn’t tell you who earns what below $100,000. But because it takes a school teacher 10 years to reach that level, it is a safe bet that most Ontario government employees earn well below $100K, even in today’s dollars.

If we use $160,000 as the new cutoff (the same 1996 standard, adjusted for inflation), there are exactly 765 earners working for Ontario school boards at or above that level, none of whom are teachers; salaries like that generally go to school board superintendents and the occasional principal. Those 765 education-sector earners are far fewer than the 80,434 sunshine earners working for school boards. There have been many calls to update the list to account for the change in the cost of living, but since, as you can see, the updated list would number less than 1% of the current one, it would not have nearly the same impact, nor cause anywhere near the same outcry.

And I have to ask: why the outcry? We live in a world where Amazon workers are fired for being in the bathroom too long, thereby being a drain on Bezos’s ambition to buy himself another rocket, and where the average CEO earns more than 300 times what the average worker under him does. Government workers got where they are through union activity, and out of the recognition that the boss was never going to turn nice one day and grant a living wage. The ones who don’t form unions get the shit jobs and shitty lives they duly fought for.

I realize I am being sardonic, but I am also suggesting that fighting for a living wage and adequate benefits is never easy and always a struggle, and that bosses are hired to care more about profits than about whether your pay matches your skills, whether you take home a living wage, or even your mental and physical health. Where is the outrage at the CEOs of private companies who earn so much off the backs of their employees? Or at the private companies whose government “partnerships” benefit from the largesse of the taxpayers? These people are invisible on the Sunshine List.

People lose their minds when a government employee earns a living wage, but don’t seem to have a problem when a CEO reports compensation in the billions of dollars at a shareholders’ meeting, doesn’t know what to do with all that money, and buys himself a rocket. Meanwhile, his employees are so stressed they cannot hold down a warehouse job for longer than a year or so, lest they be sacked for the crime of taking a bathroom break in an actual bathroom rather than peeing in a bottle like a good employee. This is what happens when you don’t fight for better working conditions.

To the left is a summary of salaries above $100K paid to all employees in the School Board sector of government: all managers, custodial staff, secretaries, teachers, psychologists, other specialists, and board office employees right up to the director. Nearly everyone earns below $110K, with the number of earners in each successive bracket falling precipitously as you go up in salary. With the full list sorted by salary, it is possible to determine the median salary for a School Board Sunshine List employee (remember, not all government employees): $103,129.16, or $65,411.73 in 1996 dollars, using data provided by the Toronto Star for the conversion.
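
The median calculation described above is mechanical once the list is sorted. Here is a sketch in Python, with invented salaries standing in for the real Sunshine List data:

    # Median of a salary list.  The numbers here are made-up stand-ins,
    # not actual Sunshine List entries.
    def median(values):
        ordered = sorted(values)
        n = len(ordered)
        mid = n // 2
        if n % 2 == 1:
            return ordered[mid]               # odd count: middle value
        return (ordered[mid - 1] + ordered[mid]) / 2

    salaries = [100_480.10, 101_250.00, 103_129.16, 122_000.00, 251_500.00]
    print(f"${median(salaries):,.2f}")        # $103,129.16 for this toy list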

Below is a breakdown by government sector.

A Career Postmortem: Dr. Brian Wansink


Dr. Brian Wansink. Photo courtesy of Wikimedia Commons.

Being formally trained as a food scientist in my undergrad years, I had heard about Wansink’s 2006 book Mindless Eating, and became an admirer after reading it. Because I was a casual reader, I made no effort to “look under the hood” at any papers and studies he might have referred to, and took him at his word as a then-executive director of the Center for Nutrition Policy and Promotion at the US Department of Agriculture (USDA). He was responsible for overseeing the design of the 2010 Dietary Guidelines for Americans, as well as the government-run nutrition site MyPyramid.gov. He was also a long-time director of the Food and Brand Lab at Cornell University. With all that under his belt, why would I question what he wrote?

The book Mindless Eating has inspired many to be more active and deliberate in managing their nutritional cues, and to take a deeper look at how humans are hard-wired in their perceptions of food. The real strategy, it suggests, is to find ways to work around these hard-wired perceptions rather than against them.

The way he ran his experiments, mostly on college-aged subjects attending Cornell, was to offer free food (what college student wouldn’t be attracted by that?). Once you were hooked by the free food (and sometimes a movie), the science kicked in. Plates and food packaging would be weighed by difference, without the subject ever knowing it was being done, yielding a fairly accurate calorie count. Then came the questions about your own perceptions: How much did you think you ate? How many calories did you think you consumed? Depending on what was being investigated, the results, when fed back to the participants, were often remarkable and surprising. Some of the perceptual tricks built into the experiments fooled even graduate students in dietetics, and the tricks could be as simple as changing the size of the plate.

Dr. Wansink seemed sly and clever. He had to be, because humans can be even more sly and clever in fooling themselves into thinking they ate less than they did. The world clearly needed someone like Wansink to expose our human frailties to ourselves, and to show us how we fool ourselves into eating more than we planned to, or more than we thought we did.

Two-Buck Chuck comes in many varieties, including red and white.

In Mindless Eating, among his many tales, he discusses people’s perceptions of a meal based on the perceived vintage of the wine served with it. The investigators purchased several cases of the cheapest wine possible: Charles Shaw, nicknamed “Two-Buck Chuck”, sold at the Trader Joe’s chain in the United States; at the time, it really could be purchased for two dollars (USD). All bottles had their labels removed and replaced, some with a fictitious label suggesting the wine came from California, others with a label suggesting it came from North Dakota, a state not known for making wine. The patrons given the various wines with their meals were asked to rate the food (not the wine) they were served, and whether they would come back. The reaction was far more favourable when the label suggested a California wine. It was a bit of a sly trick, but at least the 117 diners in the study got a prix fixe gourmet meal set at $21.00 (USD), with free wine.

There is another story Wansink liked to tell, about a bowl of tomato soup refilled from the bottom through a food-grade feeding tube invisible to the participant, fed from a 2-gallon pot of soup. Participants seemed oblivious to the bowl that would never empty. The finding: people eat on average 73% more soup than a normal serving when there is no visual cue telling them to stop. Our stomachs are indeed a very crude instrument for measuring how much we have eaten; we need visual cues, which can be interfered with by a bottomless bowl, but also by everyday distractions. That is what the experiment aimed to show. For this experimental design, Wansink received the Ig Nobel Prize in Nutrition in 2007.

Ig Nobel Prizes are awarded to scientists whose research first makes people laugh, then makes them think. The prizes are awarded by the publication Annals of Improbable Research (AIR) and handed out at an annual ceremony at Harvard University in Cambridge, Massachusetts, with lectures from the prizewinners given across town at MIT.

Wansink showed how our perception of food quantity is vulnerable to lighting; the presence of company, entertainment or other distractions; the size of our plates; the shape of our drinking glasses; the proximity of junk food to where we happen to be sitting; and so on. All of it was compelling and often headline-grabbing. Over the years, all 3 major American television networks interviewed him about his findings.

He appeared able to back his findings quantitatively, but any graduate student using them now knows to check his numbers. At first, no one accused him of fraudulent research, just sloppy research, with statistical calculations that did not match up with other reported numbers. It began with a now-deleted blog post in which, according to The Cut,

Wansink told the story of a Turkish Ph.D. student who came to work in his lab for free. “When she arrived,” he wrote, “I gave her a data set of a self-funded, failed study which had null results (it was a one month study in an all-you-can-eat Italian restaurant buffet where we had charged some people ½ as much as others). I said, ‘This cost us a lot of time and our own money to collect. There’s got to be something here we can salvage because it’s a cool (rich & unique) data set.’ I had three ideas for potential Plan B, C, & D directions (since Plan A had failed).”

Wansink wrote glowingly about the Ph.D. student, Ozge Sigirci, and about her ability to see the offer of data as an opportunity and get herself published. And that she did: five papers, bylined by both Wansink and Sigirci, came out of this “failed study”. To grad students reading the blog and wanting their own work published, this raised eyebrows. He was suggesting that it was just fine for a scientist to take a failed study and massage the data against different null hypotheses until a correlation turns up that is significant at the 95% confidence level, thereby rejecting the null hypothesis (H0). This is science done backwards: you pose the hypotheses before the experiment is run, not after. In other words, a scientist doesn’t run an experiment without knowing what they are researching beforehand.

The kind of statistical error being committed in these papers is known as a “Type M error” (“M” stands for “magnitude”): even when you find a correlation that clears the 5% significance threshold, the size of the effect can be wildly exaggerated. Remember, this result was stumbled upon by slicing and dicing the data until a correlation with “anything” emerged. In that context, how much evidence does the data really give you against an H0 that came as an afterthought? It would be better to run a modified experiment to see whether the same effect appears when you test for it deliberately.
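
A toy simulation makes the Type M problem concrete: when the true effect is small and the measurements noisy, the subset of runs that happen to clear p < 0.05 reports, on average, a wildly inflated effect. The parameters below (true effect 0.1, noise 1.0, n = 25) are invented for illustration and have nothing to do with Wansink’s data.

    # Toy simulation of a Type M ("magnitude") error: keeping only the
    # statistically significant runs exaggerates the estimated effect.
    # All parameters are invented for illustration.
    import random
    import statistics

    TRUE_EFFECT, NOISE_SD, N, TRIALS = 0.1, 1.0, 25, 20_000
    CRITICAL = 1.96                       # two-sided z test, alpha = 0.05

    significant = []
    for _ in range(TRIALS):
        sample = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N)]
        mean = statistics.fmean(sample)
        se = NOISE_SD / N ** 0.5          # standard error (sigma known)
        if abs(mean / se) > CRITICAL:     # the "publishable" subset
            significant.append(mean)

    print(f"true effect: {TRUE_EFFECT}")
    print(f"average significant estimate: {statistics.fmean(significant):.3f}")
    # Typically prints about 0.4, roughly four times the true effect.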

In the blog post, Wansink listed the papers that were published and where, giving readers 5 key papers to be sceptical about. And a research team did the checking: Tim van der Zee, Jordan Anaya, and Nicholas Brown looked into 4 of the 5 papers and found around 150 statistical errors, based solely on inconsistencies in the published tables, without ever seeing the raw data. To look at the raw data, a scientist normally needs to ask the scientist who ran the experiment, and it didn’t help that, after repeated requests, Wansink refused to share his data with van der Zee et al. to settle the matter.

Now, there is no rule saying that he had to share his data. But to paraphrase Andrew Gelman of the blog Statistical Modeling, Causal Inference, and Social Science, there is also no rule saying that anyone in the scientific community needs to take him seriously, either. Since 2017, the various journals have retracted at least 18 of his papers, according to Wikipedia; another 15 have been formally corrected.

Cornell determined in September 2018 that he had committed academic misconduct. According to Science magazine of 21 September 2018:

“In a statement issued [on the 20th of September], Cornell’s provost, Michael Kotlikoff, said the investigation had revealed “misreporting of research data, problematic statistical techniques, failure to properly document and preserve research results, and inappropriate authorship.”

Wansink was removed from research and teaching activities at Cornell, according to Science, and he resigned after the statement was issued.