Ubuntu on Windows 10

For some years now, Windows and Ubuntu have been coexisting to a degree, provided you enable the Windows Subsystem for Linux and download the Ubuntu for Windows package from the Windows App Store.

It makes it possible to muck about with Windows drivers and the Windows kernel from within a UNIX environment, and even to write your own drivers that send commands directly to the Windows kernel and the TCP/IP stack. So long as you like the command line, there are some pretty cool tools and languages at your disposal, such as C/C++, Python, Perl, and many of the other usual suspects. It doesn’t really have support for Java, except as a runtime environment, but that shouldn’t stop you from installing a JDK manually. You can install the one for Linux or the one for MS-Windows; your choice. Also, there is no support at all for X-Windows.
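
As a rough sketch of what that looks like in practice, the usual toolchain can be pulled from the Ubuntu repositories just as it would be on a native install (the package names below are the standard Ubuntu ones; adjust to taste):

# Inside the Ubuntu shell on Windows: refresh the package lists, then
# install a typical command-line development kit.
sudo apt update
sudo apt install build-essential gdb python3 perl git

# Java is only a manual install away; the stock Ubuntu JDK package works,
# or you can grab a Linux or MS-Windows JDK yourself.
sudo apt install default-jdk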

The /mnt directory houses all of the drive letters that are visible to Windows. They are mounted there as folders, each named after its drive letter.
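
For example, something along these lines (the drive letters will of course depend on the machine):

# The Windows drives appear as lowercase drive-letter folders under /mnt.
ls /mnt
# c  d  e

# Files on the C: drive are then reachable through /mnt/c:
ls /mnt/c/Users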

At first I couldn’t run MS-Windows commands like Notepad from the shell, but it turns out the Windows paths are not set by default in Ubuntu. Typing /mnt/c/Windows/notepad.exe allowed it to run. In fact, you can run any Windows command, if you take the time to fix the $PATH variable. In addition, the Ubuntu subsystem doesn’t yet support the reading of ext3 filesystems, although it has no problem reading NTFS filesystems. An EXT3 driver I tried was able to identify and mount EXT3 filesystems (assigning them a drive letter) from within Windows, but no files were visible. I was offered the chance to format the drive, which I declined. So I wasn’t sure of the rationale for even having this driver if I can’t see any files.
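
Here is a minimal sketch of the $PATH fix, assuming the standard /mnt/c mount point (add whichever Windows directories you actually need):

# Run Notepad by its full path under /mnt ...
/mnt/c/Windows/notepad.exe

# ... or append the Windows directories to the search path, e.g. in
# ~/.bashrc, so that Windows programs can be launched by name.
export PATH="$PATH:/mnt/c/Windows:/mnt/c/Windows/System32"
notepad.exe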

Apart from that, it appears as if the main architect of the ext2 driver project, Matt Wu, has abandoned the project and has reduced his website to a blank webpage. I don’t see any updates on SourceForge later than 2015.

Compiling The Linux Kernel Docs

In the last article, I said that compiling and installing source versions of software was akin to “going rogue”. I must confess that I have compiled from source and installed software that wasn’t in my distribution. The most recent was TexStudio, one of the larger projects, which required tons of other libraries and whatnot to be installed as well (or, quite often, compiled from source on the side), since it wasn’t a part of the Linux distro I was using at the time. It also wasn’t a part of Cygwin, so I compiled it for that too. It was a great way to kill an afternoon.

But there was a time when I compiled the kernel from source. It was necessary for me, as speed was an issue and I had slow hardware at the time. What I also had was a mixture of hardware pulled from different computers at different times. I researched the specs on sound cards, network cards, video cards and motherboard chipsets, and knew which options to tweak in the kernel compilation dialogs to get the kernel to do the right thing: be fast and recognize all my hardware. I was doing this before the days of modules, with the version 1.x kernels. It worked, and it was noticeably faster than the stock kernels. X-Windows on my 80486 PC ran quite well with these compiled kernels, but was sluggish to the point of being unusable with a stock kernel. Every few versions of the kernel, I would recompile a new one for my PC, and pretty soon the Tcl/Tk configuration dialogs had made things easy enough that I could answer all the questions from memory.

But then that all ended with version 2. Yes, I compiled a version 2 kernel from source, and yes, it ran OK. But it also had modules. The precompiled kernels were now stripped down and lean, and modules would only be loaded as needed, when the kernel auto-detected the presence of the appropriate hardware. After compiling a few times, I no longer saw the point from a performance standpoint. Today we are well into kernel version 5.3, and I haven’t compiled my own kernel for a very long time.

For the heck of it, I downloaded the 5.3 kernel, which uncompressed into nearly 1 gigabyte of source code. I studied the config options and the Makefile options, and saw that I could just run “make” to create only the documentation. So that’s what I did.
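
For anyone who wants to try the same thing, the sequence looks roughly like this (the kernel.org URL follows the usual naming pattern, and pdfdocs is the documentation-only make target; the Sphinx and LaTeX toolchain has to be installed first):

# Fetch and unpack the 5.3 source tree (roughly 100 MB compressed,
# nearly 1 GB unpacked).
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.3.tar.xz
tar xf linux-5.3.tar.xz
cd linux-5.3

# Build only the documentation, as PDF files; the output lands under
# Documentation/output/.
make pdfdocs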

It created over 8,500 pages of documentation across dozens of PDF files. And 24 of them are zero-length PDFs, which presumably didn’t compile properly; otherwise the pagecount would have easily tipped the scales at 10,000. The pages were generated quickly: the 8,500 or more pages, errors and all, took about 3 minutes. The errors seemed to manifest as the associated PDFs not showing up under the Documentation directory. I have a fast-ish processor, an Intel 4770K (a 4th-generation i7), which I never overclocked, running on what is now a fast-ish gaming motherboard (an ASUS Hero Maximus VI) with 32 gigs of fast-ish RAM. The compilation, even though it was only documentation, seemed to go screamingly fast on this computer, much faster than I was accustomed to (although I guess if I am using 80486s and early Pentiums as a comparison …). The output to standard error from the LaTeX compilation was a veritable blur of underfull hbox warnings and page numbers.
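
Spotting the failures afterwards is easy enough; a quick find over the build tree (the output path here is an assumption based on a Sphinx-style docs build) lists and counts the empty PDFs:

# List the PDFs that came out zero-length, i.e. the ones that failed to build ...
find Documentation/output -name '*.pdf' -size 0
# ... and count them.
find Documentation/output -name '*.pdf' -size 0 | wc -l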

For the record, the pagecount was generated using the following code:

#! /bin/bash
# Tally the page counts of all the non-empty PDFs in the current directory.
tot=0
for i in *.pdf ; do
        # skip the zero-length PDFs that failed to build
        if [ -s "${i}" ] ; then
                # pdfinfo prints a "Pages:  N" line; keep just the number
                j=$(pdfinfo "${i}" | awk '/^Pages:/ {print $2}')
                # tally up the total so far
                tot=$((tot + j))
                # give a pagecount/filename/running total
                echo "${j}    ${i}    ${tot}"
        fi
done

echo "Total page count: ${tot}"