In my computer class, we attempted to rotate a graphic of Sonic the Hedgehog. The graphic was a JPEG, and the method chosen was to write Java code which read the graphic into a BufferedImage object. The idea was to copy the graphic into a second BufferedImage object, reversing the order in which pixels are read so that the new object amounts to a rotation of the original. The new image object is then written to a new JPEG file, which is inspected after the run.
This article is a bare-bones, know-nothing introduction to how to think about reading and writing graphics, one that avoids implementation details as much as possible. Documentation on the details of BufferedImage is plentiful on the internet, as is information on computer graphics generally.
In a learning activity, students in grades 10 and 11 were asked to modify the statements:
int w2 = x;
int h2 = y;
such that (w2, h2) becomes the new position of the pixel when the graphic is rotated. Students were instructed not to modify any other part of the program, which performed object declarations, file opening and closing, try/catch statements, and other things considered too advanced for this stage of their course. Students were encouraged to use trial and error and to view the consequences of the various general formulas they tried out. width and height were already declared and given values in the code.
The tutorial below was given to the students.
In general terms, for any graphic of dimensions w × h, x can have any value in the range 0 to w − 1, while y can have values in the range 0 to h − 1.
Graphics are made of little units of colour information which end up on your computer screen as pixels. Think of each unit of information as belonging to a part of that graphic on a Cartesian coordinate plane. Much like on the coordinate plane you learn about in grades 9 and 10, x and y represent the position of the pixel horizontally and vertically.
If the graphic is 100 units across by 200 units in the vertical direction, then the graphic is said to have dimensions 100 × 200, in terms of pixels. This means every pixel in the graphic has some location (x, y). Because x has to start from 0, it can take on values between 0 and 99 for this graphic, while y can take on values between 0 and 199.
In our code, that means the dimensions of the graphic are height × width, and while x and y are fine for the original graphic, a formula must be applied so that w2 and h2 (the x and y values for the rotated graphic) become appropriate for a rotation. With the statements written as they are, all you will get is two copies of the same graphic.
It is expected that the formula should be simple, but beware of values going beyond the range of the graphic dimensions: that will result in a crash. The dimensions of the graphic are given as height × width, and their actual values shouldn’t matter all that much. You might want to insert printf statements so you can trace h2 and w2 against the values of height and width and see what is going on.
It is also noteworthy that you can only carry the Cartesian plane analogy so far. (0, 0) represents the pixel in the top left corner of the graphic, while for a graphic of dimensions h × w, the coordinate (w − 1, h − 1) represents the pixel in the bottom right. The first difference is that no negative values are possible. The second difference is that the y-axis is upside-down, because it increases downward. The x-axis is still fine, however, increasing from left to right as usual.
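To make the tutorial concrete, here is one possible solution for a 90° clockwise rotation, sketched in Java. The loop and image setup here are my own simplified framing, not the students’ actual program, though `width`, `height`, `w2` and `h2` are meant to match the names used in the exercise.

```java
import java.awt.image.BufferedImage;

public class Rotate {
    // Rotate a BufferedImage 90 degrees clockwise (a sketch, not the
    // class's exact program). A pixel at (x, y) in a width-by-height
    // image lands at (w2, h2) in a height-by-width image.
    static BufferedImage rotate90(BufferedImage src) {
        int width = src.getWidth();
        int height = src.getHeight();
        // Note the swapped dimensions: the rotated image is height wide
        // and width tall.
        BufferedImage dst =
            new BufferedImage(height, width, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int w2 = height - 1 - y;   // new x-coordinate
                int h2 = x;                // new y-coordinate
                dst.setRGB(w2, h2, src.getRGB(x, y));
            }
        }
        return dst;
    }
}
```

The key point for the exercise is that w2 always stays within 0 to height − 1 and h2 within 0 to width − 1, so no pixel falls outside the new image’s dimensions and nothing crashes.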
One of the few things you see on the web these days is how to do a really good magic square. There are many websites that tell you how a spiralling arrangement of sequential numbers on a square matrix is magic, but for me, that’s dull. You are limited to seemingly fewer than a dozen such magic squares, so I don’t find them too interesting.
Recall that magic squares are numbers arranged in a square matrix such that each of its rows and columns, and normally both diagonals, add up to the same number. Usually, a square of n numbers to a side, which has n² numbers in total, will be populated with the entire set of numbers from 1 to n² inclusive, in some quasi-random order. These numbers are arranged in such a manner that the totals of each of its rows, its columns, and both diagonals equal the same “magic number”, which differs depending on the dimensions of the square. Using the random methods suggested in this article, the number of magic squares possible when n is odd is equal to (n!)².
For the 5×5 square, you apparently have to start by moving from the current position to the “top right” square (wrapping to the opposite edge if necessary), and if that square is occupied, moving down by 1 square instead. This non-random, deterministic method works for all squares of odd dimension.
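For the curious, the deterministic rule just described (often called the Siamese method) can be sketched in Java as follows; the class and method names are my own invention, an illustration rather than anyone’s published code.

```java
public class Siamese {
    // Build an n-by-n magic square (n odd) with the rule described in
    // the text: place 1 in the middle of the top row, then keep moving
    // one square up and to the right (wrapping around the edges); if
    // that square is already taken, drop down one square instead.
    static int[][] build(int n) {
        int[][] sq = new int[n][n];
        int row = 0, col = n / 2;            // middle of the top row
        for (int k = 1; k <= n * n; k++) {
            sq[row][col] = k;
            int r2 = (row - 1 + n) % n;      // up, with wrap-around
            int c2 = (col + 1) % n;          // right, with wrap-around
            if (sq[r2][c2] != 0) {           // occupied: move down instead
                r2 = (row + 1) % n;
                c2 = col;
            }
            row = r2;
            col = c2;
        }
        return sq;
    }

    // Check every row, column and both diagonals against the
    // magic constant n(n^2 + 1)/2.
    static boolean isMagic(int[][] sq) {
        int n = sq.length, magic = n * (n * n + 1) / 2;
        int d1 = 0, d2 = 0;
        for (int i = 0; i < n; i++) {
            int r = 0, c = 0;
            for (int j = 0; j < n; j++) { r += sq[i][j]; c += sq[j][i]; }
            if (r != magic || c != magic) return false;
            d1 += sq[i][i];
            d2 += sq[i][n - 1 - i];
        }
        return d1 == magic && d2 == magic;
    }
}
```

For n = 3 this reproduces the classic Lo Shu square, with 5 in the centre.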
I read in an old book on recreational math (The Fascination of Numbers, by W. J. Reichmann (1958)) that:
Squares of even dimensions (4×4, 6×6) have to be arranged by a different algorithm than squares of odd prime dimension (5×5, 7×7, 11×11, …).
A randomly-generated 5×5 magic square can be made using the sum of two matrices.
The number of possible permutations of 5×5 matrices is equal to (5!)².
Reichmann’s book was the only place where I could find such an algorithm; it seems rare even in an internet search. But it is the only method I know of that leads to “magic” results in a variety of ways: these squares seem to be the most robust in terms of the number of ways their “magic” qualities can be demonstrated. Several years ago they inspired me to write computer programs that generate such squares, as a way of practicing programming. I have written magic square programs following Reichmann’s algorithm (I am not sure if he originated it) in VB5, Visual Basic .NET, VB for Applications (in Excel), and Microsoft QuickBasic 4.5. The 16-bit QB 4.5 version does not run on my 64-bit machine, and for similar reasons, neither does the VB5 version, whose runtime DLL is no longer supported by recent versions of MS Windows.
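As a sketch of the sum-of-two-matrices idea, here is one standard construction in Java (not necessarily Reichmann’s exact recipe): one matrix supplies multiples of 5, the other supplies the units, and any two permutations of {0, …, 4} yield a magic square, which is where the (5!)² count comes from.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Random5x5 {
    // Sum-of-two-matrices construction for a 5x5 magic square. The
    // coefficient choices (i + 2j and 2i + j, mod 5) make every row,
    // column and both diagonals hit each index exactly once, so ANY
    // two permutations a and b of {0..4} give a magic square.
    static int[][] build(int[] a, int[] b) {
        int[][] sq = new int[5][5];
        for (int i = 0; i < 5; i++)
            for (int j = 0; j < 5; j++)
                sq[i][j] = 5 * a[(i + 2 * j) % 5] + b[(2 * i + j) % 5] + 1;
        return sq;
    }

    // Draw the two permutations at random, one square per call.
    static int[][] random() {
        List<Integer> a = new ArrayList<>(List.of(0, 1, 2, 3, 4));
        List<Integer> b = new ArrayList<>(List.of(0, 1, 2, 3, 4));
        Collections.shuffle(a);
        Collections.shuffle(b);
        int[] pa = a.stream().mapToInt(Integer::intValue).toArray();
        int[] pb = b.stream().mapToInt(Integer::intValue).toArray();
        return build(pa, pb);
    }
}
```

Since there are 5! = 120 choices for each permutation, the construction can reach (5!)² = 14,400 squares, matching the count quoted above.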
In the next instalments, starting this coming Saturday, I will begin to discuss the making of 3×3 and 5×5 squares, and discuss their magic properties.
I had read the novel 1984, but the work that influenced me more was George Orwell’s earlier essay, written in 1945, entitled Politics and the English Language. You can google a PDF for yourself quite easily, or you can purchase one of the many college-level readers used in composition courses that contain the essay, probably with better formatting. My understanding is that it is not used as often these days in courses on prose style or rhetoric.
The impression it left on me in my early 20s was quite profound, and it has influenced my writing to this day. Orwell’s message in this essay was quite simple: one’s written expression should be free of tired, overused phrases that do the thinking for us. Such phrases and words largely cloud our meaning, and as a result we fail to make our point effectively, or at all. His aim was to get us to express ourselves authentically, in everyday English, free of tired jargon, deadwood phrases, and other forms of unnecessary pretense that generate more smoke than light. And, of course, he encourages us to break any of his rules sooner than let our language become even more awkward in the following of them.
The 1940s were a time of high-minded-sounding rhetoric. The Nazis had just been defeated and were fresh in everyone’s mind; Stalin was still the dictator of the USSR; and England and the rest of Europe were rebuilding and repairing themselves from the damage of World War II. Surely, political rhetoric, slogans, and catch phrases were not in short supply in Orwell’s day. Germany had all but invented modern propaganda, while England and America were quickly adopting their own brands of it during and after the war. Noam Chomsky pointed out much later that propaganda has been felt to be necessary by the elites in power to generate “necessary illusions” and “emotionally potent oversimplifications”, so that the consent of citizens to any new government policy can be manufactured on demand (hence the phrase Manufacturing Consent, also the title of one of Chomsky’s books, co-authored with Edward Herman).
Such was the influence of George Orwell that by 1992, Edward Herman had written a book called Beyond Hypocrisy, which featured an extensive glossary he called the Doublespeak Dictionary. So, to update Orwell’s laundry list of tired political phrases, here is a more recent sample of political phrases used to the point of losing their original meaning, cited by Herman: Antisemitism, Benign Neglect, Communism, Environmental Terrorists, Freedom Fighter, National Interest, Special Interest, and so on.
Orwell would agree with Václav Havel, who was quoted as saying: “Ideology is a specious way of relating to the world. It offers human beings the illusion of an identity, of dignity, and of morality, while making it easier for them to part with them.” This is because facilitating the illusion of dignity and morality requires language. Rather than serving as a means of authentic self-expression, language is now used as a means of mind control. But mind control is a double-edged sword: while it gives you identity, dignity, and a sense of morality and purpose, it can also blind you to transgressions committed in its name. The reason is that the same language can be used in a way that sterilizes one against feelings of guilt when committing transgressions against others, particularly the perceived enemies of one’s cause.
Of course, the current decade has produced some of the greatest howlers of tortured English that I think we have seen yet: “speaking my truth”, “cancel culture”, “problematize”, “heteronormativity”, “womxn”, “latinx”, “intersectionality”, “privilege”, “shaming”, and of course, the big 4-letter word: “woke”. That is far from a complete list. I could continue: “lived experience”, “othering”, “platform”, “content provider”, “punching down”, “queering”, “spaces”, “they/them” as a singular pronoun, and “voices”.
All of these are phrases that, as they become overused over time, hide and blur meaning more than they clarify it. Phrases like “cancel culture” have been overused by members of all political stripes to the point where all life and meaning have been eviscerated from the phrase. “Speaking my truth” makes truth subjective by calling it “my truth”, and thus weakens and trivializes the word “truth”.
I find the recently-coined phrase “content provider” particularly offensive. I am a writer of essays for this web-based journal. What writer feels any sense of dignity in calling themselves a “content provider”? The phrase describes absolutely nothing about what “content” is being “provided”. “Content” could refer to music, essays, news articles, videos, conspiracy theories, online stores, or online pornography. There is nothing about the phrase “content provider” that distinguishes my writing from, say, eBay, YouTube, or InfoWars.
Many of these phrases and words appear designed not to make the world more open-minded, but to further isolate their users from mainstream society, thereby defeating their own purpose and reducing further discussion to an echo chamber in which the message never properly gets outside the closed circle of “woke” people. The stilted words act as a barrier to entry for everyone else (since most people don’t know what “woke” people are talking about), which actively prevents their ideas from becoming mainstream. Language, which usually facilitates delivering a message, is here being used in a way that prevents any hope of widespread adoption of ideas or actions in support of oppressed groups.
In nearly all cases, there is almost certainly a word or phrase in everyday English that could convey one’s thoughts more clearly. That has been my guiding principle throughout university. To free yourself of all of these catch phrases is to make your thoughts your own, shorn of all pretense.
I wish to proceed with some criticisms as to why not everyone thinks as highly as I do about Orwell’s essay. To get a sense of the criticism, I will repeat Orwell’s six rules here:
Never use a metaphor, simile or other figure of speech which you are used to seeing in print.
Never use a long word where a short one will do.
If it is possible to cut a word out, always cut it out.
Never use the passive where you can use the active.
Never use a foreign phrase, a scientific word or a jargon word if you can think of an everyday English equivalent.
Break any of these rules sooner than say anything barbarous.
It is one of the simplest style guides around, and a style guide for modern times. But even with rule 6 in place, the rules are still considered rigid by many writers. I am sure to have broken rules 1 to 5 somewhere in this essay; Orwell himself admits to breaking them in Politics and the English Language. I think of this set of rules as an ideal, while knowing that I am likely to be accused, as other writers have been, of overusing rule 6. Yes, I break these rules; but if I stop myself, I need to ask whether I am about to say anything ridiculous by applying rules 1 to 5. Maybe, but probably not.
Rule 4 is controversial since, while the active voice makes a passage more readable, it makes the person in the sentence its subject. “I went to the movies” makes me the subject (active voice), while “The movie was attended by me” makes the movie the subject instead (passive voice). The latter sounds pretty bad, and Orwell would have had something like it in mind when he made that rule. An inanimate “thing” as the subject of a sentence might not sound right unless you really do want to discuss that “thing” and treat the person as incidental. That is the ideal for scientific writing, where the effort is to discuss what is observed; no one is interested in the observer. Then there is rule 5: of course, where the topic lies within a specific scientific field, it is difficult to avoid scientific terminology that might sound strange to a lay person.
Others have their hair on fire: if you reduce English to a basic subset of words, as Orwell suggests, then what hegemony does that play to, they ask? I am not sure I follow this line of reasoning. From my reading, Orwell was aiming at clarity throughout his essay. The entire point was not to silence people, but to enable them to discover their true voice, free of the carelessly predigested words and phrases all of us are prone to use from time to time. You can only become active against the current hegemony if you know what injustices you are fighting against and can communicate them clearly to others, so that others may engage in the conversation. It prevents this kind of stuff:
Bottom line, what it will ultimately require to end all the tragedies we see unfolding around us is a round-up of the Luciferian “elites” – and their minions in government positions and all areas of private life – those who aspired to and who have engineered and are now peddling as fast as they can to accomplish the decline and fall of the United States of America – and other countries. In short, the “New World Order” crowd.
Patricia Robinett, Thought Crime Radio web log, June 2, 2022
This quote is from a far-right web log. The author was writing about the recent school shooting at Robb Elementary School in Uvalde, Texas. Who does the author have in mind as the enemy here? The word “elites” is in quotation marks, and while these “elites” appear to have connections to people in government, they also have connections to people in “all areas of private life”. So they could be anybody. The author never offers proof of the existence of these “Luciferian elites”, nor of their “minions”, who apparently can be seen everywhere. The “New World Order crowd”, whoever that is, does not help clarify who is being referred to, or how identifying them has anything to do with getting government to support the gun lobby, which I think is the point of the article (protecting children from crazed gun-toting people is mentioned in passing, however). Clarity is an endangered species in this example. It didn’t help that Breitbart was cited as the source of the author’s information.
Another criticism is that Orwell appears to reduce Fascism to a problem of English usage. It does look that way, and it sounds excessively reductionist on Orwell’s part, but consider that fascism, with its attendant use of propaganda, is nothing without mind control, and the only way into the minds of the masses is through a constant drumbeat of language, images, and video. Using simple words is something anyone can do, and a deliberate application of Orwell’s rules disrupts one of the most important avenues propagandists rely on. Orwell reminds us that propagandists only have access to our minds through language because we have chosen to allow it. Clearing our minds of jargon is important to knowing our own thoughts and to making them known to others. It is also an important part of intellectual self-defense against the deluge of propaganda that pervades our culture.
I was experimenting with Danny Dawson’s 4×4 magic square script, and began to consider writing my own script. But I just thought I would do a few runs for my own research. I wanted to thank Mr. Dawson for his fine work which I am obviously gaining knowledge from, but his comments page thought I was a spam bot, and rejected my comments. Oh well ….
The central discussion here is about building an algorithm for a computer program that can search for and find all or most 4×4 squares, centering on initial construction. But the discussion of pairings applies to magic squares generally.
Some number pairings work better on the 4×4 square than others. What I mean by number pairings are two sequential numbers placed next to each other along a column or row of the magic square. Dawson’s script allows me to place any numbers on the magic square, and it outputs all of the magic squares that fit the arrangement I suggest, by filling in the rest. I only worked with two numbers at a time, and it suggested complete squares which work with that placement of a pair of numbers. Some number pairs resulted in more than one magic square, while other pairs gave none.
This tells me that in a computer algorithm for such squares, if an “unsuccessful” pair of numbers shows up next to each other in a magic square made by a brute-force algorithm such as Dawson’s, the smart thing to do is to detect this situation and abandon the square’s construction, thus saving computer processing time — an amount of time Dawson attested could be long, on the order of hours at best and decades at worst.
My research was far from thorough, but I think I took into account most situations where the pairings would show up, barring left-right reflections, rotations, or up-down reflections of the same square. It is possible I missed some, judging by clues that seemed to be left behind by some of the successful combinations. And I only considered pairings of sequential numbers, not arbitrary pairings of numbers, of which there are (16 × 15)/2 = 120 combinations.
First, the combinations of sequential numbers that were NOT successful: “3 4”, “7 8”, “9 10”, “11 12” and “13 14”. Finding these pairs next to each other resulted in no output in all of the placements I tried. This is likely not the complete picture, since when I tried “5 6”, only one unique square was found (and no others); while an attempt to place the famous Dürer pun “1514” on two adjacent cells produced nothing until I moved the “15 14” to the centre columns of the top row. No other unique solutions were found for the Dürer pun. None at all.
If a programmer were serious about finding such “rare” solutions, then he or she would not consider ignoring these sequential pairs. On the other hand, if missing a half dozen or so squares is not important, then one is wise to look for these sequences in a row or column and abandon all such squares to save time, rather than building a square that is destined to fail.
In fact, it would be worth checking for these pairings before checking for “magic”, although both the pairing check and the magic check can be done on the fly, during construction, as a way of bailing out early and moving on.
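As an illustration, a pruning check of the kind just described might look like this in Java. The list of “doomed” pairs comes from my trials above, and all the names here are invented for the sketch:

```java
public class PairPrune {
    // Adjacent sequential pairs that (in my trials) never led to a
    // completed 4x4 magic square. Detecting one of these early lets a
    // brute-force builder abandon the square before wasting time on it.
    static final int[][] DOOMED = { {3, 4}, {7, 8}, {9, 10}, {11, 12}, {13, 14} };

    // Return true if the (possibly partially filled, 0 = empty) square
    // contains a doomed pair side by side in any row or column.
    static boolean hasDoomedPair(int[][] sq) {
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 3; j++) {
                if (isDoomed(sq[i][j], sq[i][j + 1])) return true; // row neighbours
                if (isDoomed(sq[j][i], sq[j + 1][i])) return true; // column neighbours
            }
        }
        return false;
    }

    // A pair is doomed in either order ("3 4" or "4 3").
    static boolean isDoomed(int a, int b) {
        for (int[] p : DOOMED)
            if ((a == p[0] && b == p[1]) || (a == p[1] && b == p[0])) return true;
        return false;
    }
}
```

A builder would call hasDoomedPair after each placement and backtrack immediately on a true result, rather than completing and then rejecting the square.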
All other successful pairings:
1 2, and 2 3 both gave generous numbers of squares
4 5: gave patterns reminiscent of the Dürer square
5 6: only 1 square was found in my trials
6 7, 8 9, 10 11, 12 13: all gave generous numbers of squares
14 15: only worked if these numbers appeared in the middle columns of the top row (means that the bottom row and both left and right sides should work also; in addition, the “15 14” combination should work in the same way) (11 solutions)
15 16: gave a few solutions, but only in the first and second cells of the first row, as far as I can tell
Obviously, the five “failed” pairings given above are not the whole picture. Of the 120 possible pairs of numbers between 1 and 16, there are surely more that would result in no square being formed, thus saving more time.
Jackdaws are primary source documents found in libraries, used in research or for class discussions. They are usually reproductions of iconic photos, letters, diary entries, and so on.
They can also be a species of bird, related to crows and ravens. I have often used (or misused) the word to mean “random books and documents that can’t be classed anywhere else”. Now I have to find another word for it.
The Ontario government has released the Sunshine List, a publicly available list of the names, positions, and workplaces of every government employee earning over $100K per year. It was started in 1995 by the Mike Harris government as a way of naming and shaming those who commit the sin of earning above six figures. The article that appeared in today’s Toronto Star had a picture of an elementary school teacher and a classroom of young children just below the headline, to suggest the targets of this list.
However, the list targets all 240,000 or so full-time government employees who get a paycheck from Queen’s Park, regardless of the sector of government involved: Public Works, Healthcare, the ministries, OPG, the LCBO, and so on. And that just scratches the surface.
The 26 top wage earners working for school boards are those earning more than $250K. All of these people are school board directors, with the occasional associate director. Compared against the other sectors of government, the education sector is still the lowest-paid, as it always has been. So it is no surprise that the sector called “School Boards”, according to the Sunshine List, has the lowest average salary among those earning above $100K.
The reality of such perceived largesse is twofold. First, the list has become less impressive in its impact than it was in 1996: $100K today has the same buying power as a salary of $69,769.70 back then.
There is also taxation, which eats up about $35,000 of your $100K gross earnings: the money you earn is not what you take home. And that $65K of take-home pay buys what $45K would have bought in 1996. You can still live more or less comfortably and relatively debt-free on that salary, but it is far from lavish, especially in the Greater Toronto Area, where you won’t be able to afford a house or even a condo. An earner taking in $70,000 back in 1996 could buy a home in the GTA; nowadays, an employee in the GTA earning $100,000 is lucky to find a two-bedroom apartment that doesn’t break the bank, especially if they are raising a family.
Because of this, the magic number of $100,000 is outdated and much less meaningful than it used to be. It was a lot of money in 1996, but nowadays is barely above a living salary for a family of 4. It only looks big because of all the zeroes after the 1. To match the buying power of $100,000 in 1995, you would need to earn about $160,000 today.
The other aspect of this is that 85% of earners on the Sunshine List earn between $100,000 and $110,000, and 70% earn less than $105,000. That means the percentage of earners between $105K and $110K is barely 15% of the distribution, and as you go up in salary, the number of earners in each successive bracket falls like a rock. Also keep in mind that the list doesn’t tell you who earns what below $100,000; but because it takes a school teacher 10 years to reach that level, it is a safe bet that most Ontario government employees earn well below $100K, even in today’s dollars.
If we use $160,000 as the new cutoff (the 1996 standard, adjusted for inflation), there are exactly 765 earners working for Ontario school boards earning $160K or more, none of whom are teachers. That level of salary is generally earned by school board superintendents and the occasional principal. Those 765 education-sector earners are far fewer than the 80,434 Sunshine earners working for school boards. There are many calls to update this list to take into account the change in Sunshine earners’ standard of living, but since, as you can see, such earners number less than 1% of the current list, an updated list would not have nearly the same impact, nor cause anywhere near the same outcry.
And I have to ask: why the outcry? We live in a world where Amazon workers are fired for being in the bathroom too long, thereby being a drain on Bezos’s ambition to buy himself another rocket. We live in a world where the average CEO earns more than 300 times as much as the average worker under him. Government workers got where they are because of union activity, and out of the recognition that the boss wasn’t going to be nice one day and volunteer a living wage. The ones who don’t form unions get the shit jobs and shitty lives they duly fought for.
I realize I am being sardonic, but I am also suggesting that fighting for a living wage and adequate benefits is never easy, always a struggle, and that bosses are hired to care more about profits than about whether your pay matches your skills, whether you take home a living wage, or even your mental or physical health. Where is the outrage at the CEOs of private companies who earn so much off the backs of their employees? Or at the private companies that form government “partnerships” and benefit from the largesse of taxpayers? These people are invisible on the Sunshine List.
People lose their minds when a government employee earns a living wage, but don’t seem to have a problem when a CEO reports a salary in the billions of dollars at a shareholders’ meeting, doesn’t know what to do with all that money, and buys himself a rocket. Meanwhile his employees are so stressed they can’t hold down a warehouse job for longer than a year or so, lest they be sacked for the crime of taking a bathroom break in an actual bathroom rather than peeing in a bottle like a good employee. This is what happens when you don’t fight for better working conditions.
To the left is a summary of salaries above $100K paid to all employees in the School Board sector of government: all managers, custodial staff, secretaries, teachers, psychologists, other specialists, and board office employees right up to the director. Nearly everyone earns below $110K, with the number of earners in each successive bracket falling precipitously as you go up in salary. With the full list sorted by salary, it is possible to determine the median salary for a School Board Sunshine List employee (remember, not all government employees) as $103,129.16 or, in 1996 dollars, $65,411.73, using data provided by the Toronto Star to do the conversion.
For some years now, Windows and Ubuntu have been coexisting to a degree, if you enable the Windows Subsystem for Linux and download the Ubuntu package from the Windows app store.
It makes it possible to muck about with Windows drivers and the Windows kernel from within a UNIX environment, and even to make your own drivers that send direct commands to the Windows kernel and the TCP/IP stack. As long as you like the command line, there are some pretty cool tools and languages at your disposal, such as C/C++, Python, Perl, and many of the other usual suspects. It doesn’t really have support for Java except as a runtime environment, but that shouldn’t stop you from installing a JDK manually; you can install the one for Linux or the one for MS-Windows, your choice. Also, there is no support at all for X-Windows.
The /mnt directory houses all of the drive letters visible to Windows; they are mounted as folders, each named after its drive letter.
At first I couldn’t run MS-Windows commands like Notepad from the shell, but it turns out that the Windows paths are not set by default in Ubuntu. Typing /mnt/c/Windows/notepad.exe allowed it to run. In fact, you can run any Windows command if you take the time to fix the $PATH variable. In addition, the Ubuntu subsystem doesn’t yet support reading ext3 filesystems, although it has no problem reading NTFS filesystems. An ext3 driver I tried was able to identify and mount ext3 filesystems (assigning them drive letters) from within Windows, but no files were visible. Windows offered to format the drive; I declined. So I am not sure of the rationale for having this driver if I can’t see any files.
Apart from that, it appears the main architect of the ext2 driver project, Matt Wu, has abandoned the project and reduced his website to a blank page. I see no updates on SourceForge later than 2015.
Cygwin is a free (as in freedom) open-source suite which provides a POSIX-based subsystem that runs on top of MS-Windows. It tries to behave as if it can do all the tasks Windows can, as if it were a wrapper for Windows. But essentially, even with an X-Window manager, it ends up being just another windowed application with windowed apps running inside it, which can be minimized so you can work with MS-Windows itself when you want to.
I wish to say at the outset that this is more of a review than anything else. Too much important information is missing to construe this discussion as a how-to manual for uninstalling or fixing a Cygwin system after a Windows reinstallation; if it were such a manual, this article would have to be much, much longer. In reality, I am really just venting frustration at how Cygwin, a “program” (for lack of a better word) I have been using for over a decade, is still very far from getting its act together.
It has been a hobby of mine to make something of this subsystem for some years, and I have found it most useful as a programming environment. It has as much support for perl, python, C/C++ and vim as you like, and can even run windowed file managers, web browsers (among them, chromium, and lesser known ones like Opera and Midori), and editors like XEmacs. It has wide support of the standard window managers, such as GNOME, KDE, xfce, lxde, fvwm2, enlightenment, WindowMaker, right down to twm. And because all of this runs in a glorified window under MS-Windows, I can switch back and forth to and from MS-Windows whenever it suits me.
If Cygwin doesn’t have the packages for a “free” (as in beer) computer language, I found I can just install a Windows version of it under Cygwin, and that is fine. All Cygwin executables are “exe”, just like Windows, so I can also run Windows commands under a Bash shell. I wanted the latest Java from the Oracle website, and I found I was able to just unpack it somewhere, under, say, /opt, and link its executables to /bin or to any directory defined in my $PATH. Or, of course Java provides a “bin” directory which you can add to your $PATH without the need for making symbolic links.
All tickety-boo if you can get Cygwin up and running. Most applications ported from other unix systems will work if you recompile from source and run the configure script. Others will compile and install after some minor editing.
The downsides of Cygwin are apparent from the point of installation. When you first install Cygwin, the installer is somewhat cryptic, although you might be able to figure most of it out. The installer lets you decide which packages you want and which you don’t, but that is really an illusion. If you want things such as your chosen X window manager to work on Cygwin, then just install everything. Maybe decide which window manager (or managers) you want and which ones you don’t. I was also picky about the texmf language packs, which slow the install down to over 6 hours, so I do take the trouble to deselect most texmf language packs that are not English or that don’t use the Latin alphabet, while choosing any math or other academic fonts. Being otherwise indiscriminate about package selection means you have to live with scores (or possibly hundreds) of programs and window managers you will never care to use. My installation is typically about 26 GB unpacked, spanning over 1 million files (1,017,806 files, to be exact) in some 65,000 folders. That is not counting my /home folder.
Another thing to know is that Cygwin has no tool to uninstall itself. So un-installation of Cygwin is infinitely more difficult than installation, for reasons we shall see.
I said earlier that I found the installer cryptic. What I mean is that the installer has to download and install each package one at a time, which is a bugger if a remote server goes down or hangs. And, especially with those texmf language packs, it appears to hang when in reality it is just plain slow. If you stop the installer and start it again, you get no indication of whether it remembered where it left off. What you do is behave as if it does remember, and click install. It does pick up from where it left off, but it is not very reassuring about it.
And God help you if, for some reason, you need to reinstall MS-Windows. That is when you find, going back to the directory with the installation, that Cygwin has introduced its own permissions; but that is not the worst permission problem. The ownership of the distribution becomes hard to untangle, in large part because when you reinstall MS-Windows, the user and admin accounts you created are reduced to SID numbers of users no longer known to the system. You can remove those unknown users with Windows’ Properties dialog, and reassert your ownership the same way, or by using the “takeown” and “icacls” commands the cmd shell provides (running as Administrator). This takes hours when the number of files is over 1 million across 65,000 folders. It is slowed down further by the fact that several files and folders have unknowable permissions and unknowable ownership, which forces you to change tactics when you hit them. After an evening and the following morning, I was able to get rid of the now-bogus users while respecting other owners as much as possible. Using the inheritance option in MS-Windows has to be done judiciously, respecting that different folders have differing sets of system permissions. Some have no system permissions (just user and group permissions), while others have strange ones like “Creator Owner”, “Creator Group”, “None”, and “NULL SID”.
If you are able to untangle the Cygwin permission problems after re-installing MS-Windows, then congratulations! You are now at a point where you can decide two things: 1) you can still configure an icon to run a shell under mintty and content yourself with a bash shell as a reward for your work on permission changing; or 2) you can delete the entire Cygwin directory tree and decide if you want to reinstall again. Both prospects are hard-won, and here you are. I wish to emphasize that option 2 is not a joke. Deletion would have been impossible without taking ownership and fixing the permissions. It just sounds like a joke.
Notice that there is no choice “3” for trying to run X or a window manager. X will complain one way or another about not being able to find display :0, or will briefly put up an X window, which crashes within seconds with no error message logged. Some apps work, such as the aforementioned mintty, but except for shell commands, that’s it. If you want to run an application that needs any of the X widgets, you have to delete the whole thing (except possibly /home) and install from scratch. In other words, you are basically screwed in all but the bleakest of ways if you reinstall MS-Windows.
Over the years there have been several reasons for reinstalling. Sometimes it was to freshen a Windows installation that was becoming increasingly sluggish and problem-ridden. “Freshening” a Windows installation involves, for me, formatting the C: drive. This is not so bad in my case: only programs and system files go on my C: drive, while my documents and other files live on other physical hard drives. My Cygwin installation also sits on one of those other physical drives, so it doesn’t take up valuable room on C:. So I don’t feel as nervous about reinstallation as some would; but there is that darned Cygwin distro I have to reckon with sooner or later. You are screwed if you sort out the permissions under Cygwin, and screwed even more if you don’t.
As a postscript, I found out where the NULL SID comes from, as well as the incorrect ordering of permissions on many of the Cygwin files — the source of the majority of my permission headaches. /etc/fstab has just one uncommented line, a filesystem called “none” that herds all of your Windows drives under /cygdrive. This has the advantage of letting you navigate to any physical drive or partition on your computer entirely within Cygwin (which is what it does). A missing option needs to be added: “noacl” (quotes omitted). This prevents Windows from trying to assign a user SID as if “none” were a user, thereby fixing many of the permission headaches.
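For reference, here is a sketch of what the repaired line looks like. This assumes the stock cygdrive mount options; the options on your system may differ, but the fix is simply appending noacl to the option list:

```
# /etc/fstab — Cygwin's default "cygdrive" mount line, with noacl appended
none /cygdrive cygdrive binary,posix=0,user,noacl 0 0
```

With noacl in place, Cygwin stops mapping POSIX permission bits onto NTFS ACLs for files under that mount, which is what was producing the NULL SID entries.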
I don’t understand why the designers of Cygwin don’t add “noacl” before they distribute it. I think the majority of us are running some form of NT-based Windows: Windows 2000, XP, 7, 8, 10, and now 11 are packaged with most computers these days, and their hard drives are usually NTFS. These bugs are specific to NTFS, and don’t show up on FAT-32 filesystems, which don’t store ACLs, SIDs, or anything of the sort.
The Cygwin website discusses these issues, but it seems that Cygwin is trying to be POSIX-compliant when Windows obviously isn’t trying to be. If they choose MS-Windows as the host system, they will have to do things its way and not try to fight it with Cygwin’s “correct” way, or disenfranchise the majority of their users for the sake of backwards compatibility. Would it kill the owners of FAT-32 filesystems, who I think are in the minority, to delete “noacl” for the sake of the majority? Once the system is installed it is too late to do it, since by then the installed apps will all have the permission bug.
In the event you decide to delete the whole shebang after hours of sorting out permissions, there is one tiny file that thwarts nearly all attempts at deletion. I have found it on two of my installations, and in both cases the problem was that one specific file: \usr\share\avogadro\crystals\zeolites\CON.cif, relative to the Cygwin top-level folder. It cannot be deleted, and its permissions and ownership cannot be changed or even read. The reason Windows appears to go braindead over this file is the filename. CON is a reserved name in MS-Windows, short for “console”, going back to the days of MS-DOS. So is naming your file LPT1, short for “line printer”, a reserved name with the same MS-DOS heritage. You can’t delete it with anything in Windows, so you need a POSIX tool, like, ahem, Cygwin, to effect the deletion.
So I deleted CON.cif using my later installation of Cygwin, and was thus able to delete the entire directory tree. There is more to this issue: what happens when you need to delete CON.cif and have no intention of reinstalling Cygwin? Stack Exchange has a whole discussion on this which makes my long story even longer, so I will end my article here.
Bell Media controls what appears to be the majority of the media market in the Greater Toronto Area.
Greater Toronto Area: First, Bell provides most people here with their internet. There is some competition from companies like Rogers Communications, but I think that really just makes Toronto a two-company town when it comes to internet. Smaller players exist, but they are so small you hardly notice them. And Toronto is hardly a bit player compared with other North American markets, as it is North America’s fourth-largest city.
And allow me to list what Bell Media owns in Canada, all of it very visible in Toronto and the surrounding area, known as the Greater Toronto Area. TV: CityPulse, much of CTV, and BNN Bloomberg. AM radio: NewsTalk 1010, 680 News, and Funny 820, the last of which is the Canadian subsidiary of iHeartMedia (Wikipedia). iHeartMedia is known for using Premiere Radio Networks to hire paid actors as callers on call-in radio talk shows to segue into planned stories or opinions. This is a common practice on many right-wing talk shows, such as those hosted by Sean Hannity, Rush Limbaugh, and Glenn Beck, and has been known, and quite openly admitted, for some time. Here is a more in-depth report on it.
π, or the circle constant, is an irrational number. Irrational numbers consist of infinitely many decimal places which never repeat, which assures that you can never convert one into an exact fraction. But π is worse than that. It is also a transcendental number, meaning that it will never be the solution of a polynomial with integer coefficients and a finite number of terms. By the year 1400, the greatest advances in the accuracy of π had been made by the Chinese, who were able to work it out to 6 decimal places.
You can come up with a lower and upper bound for π if you consider a circle of radius r, take the area of an inscribed polygon of n sides whose corners touch the circle’s edge, and then the area of a circumscribed polygon, with the same number of sides, whose sides touch the circle’s edge. This was the tactic used by the Greek mathematician Archimedes about 2300 years ago, around 250 BC. Archimedes observed that the circle constant can be had by dividing the circle’s area by the radius squared, r². But to get a circle area of any accuracy, you had to have an accurate value for π. Since he could find the areas of polygons with much greater accuracy, Archimedes decided to circumscribe a polygon of n sides, whose sides touched the circle’s edge. But these areas were always too big. So he also found the area of the inscribed polygon with the same n sides. Of course, these areas were smaller than the circle’s area. He knew that the circle’s area lay between these areas, or rather that A_in < A_circle < A_out, where A_in represents the area of the inscribed polygon and A_out the area of the circumscribed polygon. Another way of expressing this inequality may be more familiar to some: A_in < πr² < A_out. In fact, for a unit circle this gets really simple: A_in < π < A_out, since r = 1 for a unit circle.
So, to increase the accuracy of π, all Archimedes had to do was increase the number of sides of the two polygons. You would achieve full accuracy for π if the number of sides went to infinity, where the two areas converge: A_in = π = A_out in the limit. For over 2000 years, this was the holy grail for achieving π to perfect accuracy. Using this tedious method, Archimedes calculated polygon areas of up to 96 sides, so he was able to say that 3 10/71 < π < 3 1/7, or at least to estimate that π was about equal to 3.14 (decimals were not known to the Greeks, so 3.14 is a modern restatement). Remember that this was before computers, and even before the discovery of irrational numbers. Square roots were known in his day, so his own calculations involved fractions and nested square roots, which didn’t necessarily have integer or rational solutions; an expression containing a nested square root was simply left as it was. What appeared clear from his calculations was that it was not possible to express π as a rational number, or an exact ratio of whole numbers. The good news is that the estimate 3.14 is still good enough for quick calculations today.
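The polygon bounds are easy to reproduce today. The sketch below is my own illustration, not Archimedes’s procedure: it leans on the math library’s trigonometry (which itself presumes π), so it merely evaluates the two area formulas for regular n-gons around a unit circle, A_in = (n/2)·sin(2π/n) and A_out = n·tan(π/n):

```python
import math

def polygon_bounds(n, r=1.0):
    """Areas of the inscribed and circumscribed regular n-gons of a circle of radius r."""
    a_in = 0.5 * n * math.sin(2 * math.pi / n) * r * r   # inscribed polygon (too small)
    a_out = n * math.tan(math.pi / n) * r * r            # circumscribed polygon (too big)
    return a_in, a_out

# Archimedes stopped at 96 sides:
lo, hi = polygon_bounds(96)
print(f"{lo:.5f} < pi < {hi:.5f}")   # both bounds agree on roughly 3.14
```

Doubling n tightens the squeeze, which is exactly why the post-Archimedes race described below was a race for ever more sides.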
But of course, there was frustration in not being able to resolve π to a rational number, and so over the next 2000 years, across China, India, Persia, northern Africa and Europe, as we acquired more and more mathematical knowledge (including the decimal system), we could increase the number of sides of the polygons to greater and greater counts. In France, François Viète used polygons of 393,216 sides in the year 1593, but could only achieve an accuracy of 10 decimals that way: 3.1415926536 (decimals were known by then). This is the level of accuracy found on most inexpensive scientific calculators, and looks like a small prize given the number of sides in Viète’s polygon. But even this was not the state of the art. Ludolph van Ceulen, a Dutchman working around 1600, used a polygon of 2⁶² sides — 4,611,686,018,427,387,904, exceeding 4.6 quintillion sides. In those days before computers, or even electricity, this took van Ceulen 25 years as his lifetime achievement, yet only produced π to 35 decimals: 3.141 592 653 589 793 238 462 643 383 279 502 88. There were those who surpassed this as well, but not by a lot.
Sir Isaac Newton, or, what were you doing during the pandemic?
1666 was the year that Isaac Newton, a young Cambridge University student, was sitting at home, quarantining himself (as was everyone else) while the bubonic plague raged through Europe. During his two years of social distancing, among his discoveries in optics, mechanics and calculus was his method of computing the value of π. For this, the 23-year-old Newton used integration (which he called “fluxions”) on a polynomial with rational coefficients. He achieved accuracy to 9 decimals with only a 12-term polynomial. This was done by combining the circle formula with the binomial theorem. The circle formula for a unit circle is x² + y² = 1.
Solving for y in terms of x, we get y = ±√(1 − x²). It is convenient to find the area of the circle going from 0 to 1 (a quarter circle), so we only need the “positive” version of this formula: y = √(1 − x²). The integral would look like ∫₀¹ √(1 − x²) dx.
But because this only gets us the area of a quarter of the unit circle, we need to multiply the integral by 4 to get the exact answer π: π = 4 ∫₀¹ √(1 − x²) dx.
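Before expanding anything, it is worth a quick numeric sanity check (mine, not Newton’s — he had no such luxury). A simple midpoint-rule approximation of the quarter-circle integral, times 4, should land near π:

```python
import math

def quarter_circle_area(steps=100_000):
    """Midpoint-rule approximation of the integral of sqrt(1 - x^2) from 0 to 1."""
    h = 1.0 / steps
    return sum(math.sqrt(1 - (h * (i + 0.5)) ** 2) for i in range(steps)) * h

approx_pi = 4 * quarter_circle_area()
print(approx_pi)   # close to 3.14159...
```

This brute-force check confirms the setup, but it says nothing about how Newton got decimals by hand, which is where the binomial expansion below comes in.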
While we have a good idea that this will get us π, it doesn’t yet yield any decimals, since √(1 − x²) must be expanded into a polynomial. For that, you have to break some rules about the binomial theorem.
The binomial theorem applies to the expansion of binomials such as (a + b)^n, where it is normally understood that n is a whole number greater than or equal to 1 (n ≥ 1). The coefficients of a general binomial expansion go as follows: (a + b)^n = C(n,0)a^n + C(n,1)a^(n−1)b + C(n,2)a^(n−2)b² + ⋯ + C(n,n)b^n, where C(n, k) = n!/(k!(n − k)!).
Squaring makes the expression (1 + x)². Its expansion looks like this: 1 + 2x + x².
Now turning this into (1 − x)², where we subtract instead of add, gives us a similar polynomial, but this time with alternating plus and minus signs: 1 − 2x + x².
So, the strange thing Newton did with this is that he broke the rule that n be a natural number. Instead, for the purpose of finding the area of a unit circle (that is, π, to as many decimals as possible), he had to go back to the formula for combinations, C(n, k) = n!/(k!(n − k)!), to see what would happen if n = 1/2. The reason for this is that the exponent 1/2 fits the circle formula, since √(1 − x²) = (1 − x²)^(1/2).
n!, called “n-factorial”, is normally the whole number n multiplied by all of its integer predecessors down to 1, as in n! = n(n − 1)(n − 2)⋯(2)(1). You stop multiplying when you reach 1. But what if n = 1/2? For normal counting numbers, subtracting 1 gets you to the next lower whole number, and at some point you reach 1 if you keep subtracting. But if n = 1/2, then subtracting 1 will always yield a fraction, and you never reach 1; in fact, you proceed forever into the negative numbers. Thus a number like 1/2, when the formula is applied, leads to (1/2)(−1/2)(−3/2)(−5/2)⋯ going on forever. Is this useful?
It turns out, it does have a use when placed in the formula for the binomial coefficients C(n, k), because, for example, the first term is 1, due to C(1/2, 0) = 1. The next term has the coefficient C(n, 1) = n/1! = n.
So, after all that, if n = 1/2, then this coefficient becomes 1/2, so that the second term is (1/2)(−x²) = −x²/2. The third term has the coefficient C(n, 2) = n(n − 1)/2!, which for n = 1/2 becomes (1/2)(−1/2)/2 = −1/8, making the third term (−1/8)(−x²)² = −x⁴/8.
So, for n = 1/2: (1 − x²)^(1/2) = 1 − x²/2 − x⁴/8 − x⁶/16 − 5x⁸/128 − ⋯
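Those fractional binomial coefficients can be generated exactly with Python’s fractions module. This sketch builds C(1/2, k) from the product (1/2)(1/2 − 1)⋯(1/2 − k + 1)/k! and recovers the coefficients of the expansion above:

```python
from fractions import Fraction

def binom_half(k):
    """C(1/2, k) = (1/2)(1/2 - 1)...(1/2 - k + 1) / k!"""
    num = Fraction(1)
    for i in range(k):
        num *= Fraction(1, 2) - i
    denom = 1
    for i in range(1, k + 1):
        denom *= i
    return num / denom

# The coefficient of x^(2k) in (1 - x^2)^(1/2) is C(1/2, k) * (-1)^k
coeffs = [binom_half(k) * (-1) ** k for k in range(5)]
print(coeffs)   # 1, -1/2, -1/8, -1/16, -5/128
```

Because Fraction keeps everything exact, the coefficients come out as the same rational numbers Newton worked with by hand, with no floating-point fuzz.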
The first 12 terms in the infinite expansion are given in the above illustration at the start of this section.
How much of π could we get “on the cheap” by performing integration on only the first 7 terms?
Seven polynomial terms is all it takes to get close to the accuracy Archimedes once had when he averaged out the areas of a 96-sided polygon both inscribed in and circumscribed around a circle.
There is an even better calculus-based solution, which has greater accuracy in fewer terms. What you want to do is calculate the area under the same curve over 0 to 1/2 instead of 0 to 1. Substituting x = 1/2 as the upper limit makes each term shrink faster, and thus gives accurate results with less work. Newton knew that on a unit circle centered at the origin, x = 1/2 is actually cos 60°. The y-value, by extension, is √3/2. The area under the curve from 0 to 1/2 consists of a sector of the circle measured from the perpendicular (this is 1/12 of the total area of the circle, and so its area is π/12), plus a right triangle whose base is 1/2 and whose height is √3/2. The triangle has an area of √3/8 by the area formula A = ½bh. It is useful to keep only the sector and subtract the right triangle out of the integral, giving π = 12 (∫₀^(1/2) √(1 − x²) dx − √3/8). So we will illustrate the first 5 terms of what Newton did:
Computing those 5 terms gives π ≈ 3.14161, an accuracy within about 2 × 10⁻⁵, which is impressive for a mere 5 terms. If you wanted more accuracy, just add more terms. Applying 7 terms as we did above would give us π ≈ 3.141594, an accuracy of within about 10⁻⁶.
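Newton’s refinement can be replayed in a few lines. This sketch assumes the setup described above: integrate the series term by term from 0 to 1/2, subtract the √3/8 triangle, and scale the remaining π/12 sector up by a factor of 12:

```python
import math
from fractions import Fraction

def binom_half(k):
    """C(1/2, k), the fractional binomial coefficient Newton needed."""
    c = Fraction(1)
    for i in range(k):
        c *= (Fraction(1, 2) - i) / (i + 1)
    return c

def newton_pi(terms):
    # Integrate sum of C(1/2,k) * (-x^2)^k from 0 to 1/2, term by term:
    # each term contributes C(1/2,k) * (-1)^k * (1/2)^(2k+1) / (2k+1).
    integral = sum(
        binom_half(k) * (-1) ** k * Fraction(1, 2) ** (2 * k + 1) / (2 * k + 1)
        for k in range(terms)
    )
    # The integral equals (pi/12 sector) + (triangle of area sqrt(3)/8);
    # subtract the triangle, then scale the sector back up by 12.
    return 12 * (float(integral) - math.sqrt(3) / 8)

print(newton_pi(5))   # about 3.14161
print(newton_pi(7))   # about 3.141594
```

Note how quickly the terms shrink once the upper limit is 1/2: each extra term is suppressed by another factor of (1/2)² on top of the shrinking coefficients, which is exactly the “less work” Newton was after.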
Modern calculations reflect the power of computers rather than the power of human calculation
Not much progress was observed after that until the invention of computers. By 1949, the ENIAC computer gave us 1,120 decimal places, taking 70 hours. By 1973, the CDC 7600 (a Seymour Cray design) was the first machine to give more than 1 million decimal places, in 23.3 hours. August 1989 was when the 1-billion-digit barrier was broken, using an IBM mainframe. π became known to 1 trillion digits in 2002, by a Japanese team using a Hitachi supercomputer, after 25 days of computation. Since then the orders of magnitude and computation times appear to have plateaued somewhat. The latest world record, according to the Guinness Book of World Records, was in November 2020, when 50 trillion digits were found over a period of 8 months of computation. If printed in a normal-sized font on letter-sized paper at 1800 characters per page, it would require nearly 28 billion pages to contain the constant. The computer used 4 Intel Xeon processors (15 cores per CPU) running at 2.5 GHz, 320 GB of DDR3 RAM, and 336 terabytes of storage across 60 hard drives, most of them used for computation.
As of 2021, a claim has been made of 62 trillion digits in a little over 3 months. The Swiss-based computer was able to find more digits faster because of slightly more advanced hardware: two AMD Epyc 7542 32-core processors at 2.9 GHz, 1 terabyte of RAM, and 510 terabytes of storage across 38 hard drives, most of it used for swapping data with the RAM. Their “book” would be over 34 billion pages long if the digits were printed out. It was never made clear what kind of RAM was used; presumably DDR4.
The current computation of to ever more digits has become a quasi-annual event; an opportunity for computer companies to flex their technological muscle and promote their brand. These impressive calculations are not so much human achievements in the sense of Newton or van Ceulen, but are really achievements of computer engineers and software programmers.
But how many digits do we really need? Even to express the diameter of the observable universe to within a Planck length (about 1.6 × 10⁻³⁵ meters), we would only require 62 digits of π.