Hacking Linux

My special interest is computers. Let's talk geek here.
User avatar
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Hacking Linux

Post by yogi »

I don't know if you have any experience with this problem, Gary, but I'm going to toss it at you in hopes that you can at least give me a clue. When it comes to Linux, you know, I am clueless. :mrgreen:

Because of the fiasco associated with trying to install Linux Mint onto a USB memory card, I ended up needing to reinstall the Linux OS's sitting on my SSD alongside Windows 10. It has to do with the bootloader being corrupt. My hope was that by reinstalling Ubuntu and its associated GRUB bootloader, the Windows efi boot partition would be cleansed and rebuilt. I was right, sort of. Installing Ubuntu wrote its GRUB where I was hoping it would. Unfortunately, the other two Linux OS's, Kali and Mageia, did not take to it as well as Ubuntu did. Thus, I had to reinstall them as well. Now the Windows bootloader works perfectly. One of the Linux OS's, Kali, however, is experiencing that infamous nVidia video card incompatibility issue.

When I initially installed Kali I had this conflict of video driver interests but was able to sort it out. Somehow I altered its GRUB script and booted into the OS. From there I did a normal install of the nVidia drivers and everything was peachy. I had to go for help at the Kali Tech Support website in order to become inspired. Those people are nice but just as clueless as I am. They told me to do what I had already tried, but that didn't work. It's a more or less standard fix in GRUB that normally disables the video driver embedded in the Linux kernel. In fact, I did it once, but I haven't been able to duplicate it.
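
For anyone reading along, the standard fix I mean goes like this: at the GRUB menu, press e to edit the boot entry, find the line beginning with linux, and tack the parameters onto the end (the kernel version and UUID below are placeholders, not my real ones):

    linux /boot/vmlinuz-x.y.z root=UUID=xxxx ro quiet splash nomodeset modprobe.blacklist=nouveau

Then Ctrl+X boots with those settings for that one session only.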

When this happens there are other ways to install the nVidia drivers. One is to download them from the nVidia website and install the binary they supply into a directory of the target system, Kali in this case. While attempting to do that I ran into a permissions problem because I was trying to move the driver from its download directory in Ubuntu to the Home directory of Kali. I tried this approach because that is what worked when I tried it with Linux Mint. In Kali I cannot change permissions (chmod) nor will it allow changing ownership (chown).
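
The routine with that downloaded binary, for reference, is something like this (the version number here is only an example):

    # make the installer executable, then run it as root from a text console
    chmod +x NVIDIA-Linux-x86_64-430.50.run
    sudo ./NVIDIA-Linux-x86_64-430.50.run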

All this seems a little strange to me because permissions should work the same in all distros of Linux, doncha think? So, my question, if you can answer, is how do I move a file from one Linux OS to another Linux OS when they're all installed on the same SSD? One caveat I could mention is that while it worked in Mint, it only worked on the Home directory. I could not alter any other directory permissions. What's going on? :think:
User avatar
Kellemora
Guardian Angel
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Hacking Linux

Post by Kellemora »

Linux has three levels of security.
The User, The Administrator, and the Superuser called Root.
All Drivers are normally owned by Root.
So, in order to do something with a file that belongs to Root, you have to become Root yourself.

If you are in the Sudoers file, you can become Root by using your own User or Administrator password.
If not, then you can go into a terminal and, instead of using Sudo, use SU; it will ask for your root password.

Most of today's Distros only have you select two passwords, one as User and one as Administrator, which works for Root by using Sudo or SU. Then too, some Distros like RedHat require you to use Sudo/Su, hi hi.
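
In a terminal, the difference looks something like this (some-command stands in for whatever you're running):

    sudo some-command    # one command as Root, asks for YOUR password
    sudo -i              # a full Root shell, again with your own password
    su -                 # asks for Root's own password instead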

If something says you don't have permission to do that, it is because you don't own the file, Root does.
It's best not to change permissions of files, just work with or move them using the proper security level.
There are instances where permissions may get changed on a file because of the location you downloaded them to.
For example: If you download into your Home Directory, the permissions might be changed to User, when the permissions need to be Root and the file needs to live in a Root-owned directory somewhere. In this case you would want to change the permissions after it is in the Root directory or a Root directory folder, doing so while you are an Administrator with Root privileges.
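
As a made-up example, with a file called driver.bin:

    # move the file into a Root-owned location, then hand ownership to Root
    sudo mv ~/Downloads/driver.bin /usr/local/src/
    sudo chown root:root /usr/local/src/driver.bin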

Now that was as clear as mud in a pig sty, wasn't it, hi hi.
User avatar
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Re: Hacking Linux

Post by yogi »

What you reported is very clear, indeed. I know of what you speak. I spent quite a few years as a system administrator, setting permissions and assigning groups. I think I know how Linux handles permissions because it's like everybody else's, even Windows. That's one thing Microsoft did not change. LOL

So, root does indeed own the target directory for the blacklist, i.e., /etc/modprobe.d, because there are system related files therein. I have a file called blacklist-nvidia-nouveau.conf on my Ubuntu system. I want to put a copy of that into the /etc/modprobe.d directory of Kali. Access denied, as expected. I'm user=dennis and the folder/files belong to user=root. Also, as you would expect, I can't just go changing permissions in Kali when I'm logged into Ubuntu. The underlying problem here, and the reason I want that file in that directory, is that I cannot log into Kali. I'm desperately trying to log in so that I can attempt to install nVidia drivers to allow future logins.
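
For reference, the whole blacklist file is only two lines; mine looks something like this:

    blacklist nouveau
    options nouveau modeset=0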

I can't change permissions or owners in Kali unless I manage to log in some way, and because of the nVidia issue the desktop software will not load. It crashes the system. I can only conclude that it is not possible to move the file under those circumstances.


The standard Linux recovery modes that I've seen all have a GUI from which to select certain options. There is no GUI in the Kali recovery mode. It's all command line. Eventually I managed to start a tty session via the recovery mode. There were all kinds of interrupts on the screen because the kernel was trying to load a driver I told it not to. Basically it was ignoring the kernel commands I inserted into GRUB. The bottom line is that I was able to use the terminal to get to the /home directory long enough to chmod 777 /home. Perfect.
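
For anyone keeping notes, the catch is that the recovery shell mounts the root filesystem read-only, so the sequence was roughly:

    # make the read-only recovery filesystem writable, then loosen /home
    mount -o remount,rw /
    chmod 777 /home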

I changed strategy a bit and decided not to blacklist nouveau. Instead I moved the nVidia binary into /home with the intent of installing the drivers from there. As happened in Linux Mint, the nVidia installer started and quit rather quickly. It told me that the nouveau (kernel resident) drivers would not change state. Kali, like Mint, did something in their free and open source creative programming to prevent the embedded driver from being shut down. Now, if you happen to be lucky enough to get the system operating, they will allow installation of said drivers. In fact the drivers are in their repository. The hitch is that the system won't boot until those drivers are installed. Catch-22 anyone?


While Mint is Ubuntu-derived, Kali is Debian-based. Both are Linux, free ... and ... open ... source. The good developers who created these systems did a good job for most of the general population. Kali is intended to be used for penetration testing and is not targeted at your average desktop user. So, I can understand the hardening and the mental attitude that went into its design. Mint, however, is supposed to be a Microsoft killer. LOL LOL LOL Not in MY lifetime.
User avatar
Kellemora
Guardian Angel
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Hacking Linux

Post by Kellemora »

The problem with nVidia drivers is not unique to Linux boxes, it is also plaguing Windows 10 and 7 users.
Here is just one of the folks trying to get an nVidia driver to work on Windows.
https://www.cnet.com/forums/discussions ... neric-one/

I've never looked at Kali, since I'm quite happy with Debian, and Linux Mint also.

Don't know if this is any help or not. But a few folks are using the Windows nVidia driver on Linux machines by placing the driver in an NDIS Wrapper. How that is done I have no idea.
User avatar
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Re: Hacking Linux

Post by yogi »

I agree that Linux is not the exclusive environment in which nVidia causes problems. It does happen elsewhere too. But you can almost always count on a problem in Linux, whereas it is not very common in Windows. I read that article and most of the comments that follow. It's quite a nightmare to be sure. My video card is in the exact same series as the one the author owns. His model is a GTX970 while mine is a GTX960. The issue I'm having is that I cannot install the nVidia driver into Linux because the video driver that comes with the kernel pukes when it sees nVidia hardware. On those occasions where I did manage to install nVidia, it works very well. So, from my perspective the issue is with the Linux kernels, because I can always get nVidia to work when I'm allowed to install it.
User avatar
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Re: Hacking Linux

Post by yogi »

Continuing the saga of Linux vs nVidia brought me to a conclusion. In other words, I finally accomplished the task of being able to boot Kali and install drivers to run my nVidia card. The solution was being hinted at all along, in that Kali would actually boot all the way up to the login screen. It allowed entry of name and password. As soon as the [Enter] key was pressed, however, things started to go amok. This actually happened to some degree with Mint, but there were other issues with that attempt. I speculated that since Kali can and does run nVidia elsewhere, the problem is not with the basic OS. It might be an issue with the desktop. Since the Kali download page offers a wide variety of distros with several different desktops, I thought I'd try something other than what came as the default; KDE Plasma, if I am guessing correctly.

I've read that the XFCE version is less than popular. It is a fine environment for many people, but it seems there are more negative reviews than positive. So, that's the version I chose to download and attempt to install. Aside from the iso being only a few MB smaller than the rest of the distros, it looked the same during the installation process. When the time came to test it out, I was delighted to see a foreign looking desktop called XFCE. One of the first things I do with new installs is update the software. More than 700 things needed updating and I was hesitant to do something that complicated. But, I did. And the install program crashed, saying the nouveau video driver would not change state, and that's the end of that. This is the error I was seeing while trying to bring up the KDE desktop, only it was occurring before the login verification.

The good news is that I was able to get to the XFCE desktop every time. I had to rebuild a few things with each crash of the installer, and I had to blacklist nouveau drivers manually, and I had to bugger up the kernel commands in GRUB, but then nVidia drivers finally installed from the Kali (Debian) repositories. At that point I tried to boot the other three operating systems also on that hard drive; it all worked fine.
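
For the record, the steps that finally stuck amounted to something like this (nvidia-driver is the Debian package name, which as far as I can tell is what Kali's repository uses too):

    # blacklist the kernel's nouveau driver, rebuild the initramfs, then install from the repo
    echo 'blacklist nouveau' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
    echo 'options nouveau modeset=0' | sudo tee -a /etc/modprobe.d/blacklist-nouveau.conf
    sudo update-initramfs -u
    sudo apt install nvidia-driver
    sudo reboot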

Your words of support for Linux being way more configurable than Windows rang true during this entire process. I've even whined about Linux being too open and too easy to change. A lot of tech support forums were visited during this adventure and the only comfort I got there was that I'm not the only one having these problems. There is no single solution in any of the support forums. That's the really discouraging part about Linux distros. They come in many flavors, permutations, and combinations, which makes it very difficult to troubleshoot. The really distressing fact is that you need more than the average amount of knowledge about operating systems in order to do the troubleshooting, not to mention what it takes to implement the patch and/or fix. That's not to say I've not had similar problems with Windows. Vista was a nightmare, but in the end I did what I have now done with Linux. I made it all work. Having been down both paths, Windows is the OS of choice for a system that "just works." Linux is highly configurable but exceptionally difficult to maintain; it has taught me a lot. I learned mostly what I didn't need to know and what should have been worked out by the developers.
User avatar
Kellemora
Guardian Angel
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Hacking Linux

Post by Kellemora »

If I read this correctly, I think congratulations are in order. It sounds like you finally got it working, albeit minus the rest of your hair in the process, hi hi.

I've had a few problems when I first started using Linux again, but most of those were because I didn't understand how things worked. As you said, so much different than Windows.

More than once I've crashed a new install trying to get things to work and certain drivers installed, which only led to the black screen of death, and the only way I knew how to get out of it was reformat the drive and start over again with a clean install.

What irks me is a problem I still have when I boot up. I get an error message about needing to install the R600 video drivers. Trouble is, if I do that, I lose video and have to reinstall again. The reason being that the driver for my video card is NOT in that R600 package, and its best candidate from that package flat out does not work.

The helpful Hannahs out there are no help either. They say, oh it's simple, just SSH into the machine and turn off the driver.
To me that's like saying, oh real easy, just go to the moon and make a left turn. How do I get to the moon?

Although Debian itself is basically bare bones and you have to add everything yourself, I am familiar with it, and can usually find the answer to a problem. Not that the answer solves the problem, hi hi.
I do like the fact that they are now building separate areas for drivers that the kernel points to, so you don't have to rebuild a kernel. That alone caused me tons of grief, because an upgrade always installed a new kernel and I was back to square one again. Now I always load the latest kernel, but if it don't work, I reboot into the one that was working and delete the new one.
User avatar
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Re: Hacking Linux

Post by yogi »

You are correct. I solved the immediate and most pressing problem regarding Linux multi-booting on a Windows based machine. I now have Windows 10 installed on my laptop and can boot into Linux Ubuntu, Kali, and Mageia. Not only that, they all use the nVidia video card built into my computer.


One challenge remains. I want to more or less duplicate what you told me is happening at the library. That library machine has Windows 10 as a base system and it has a way to boot into external storage. The external storage would in fact be a fourth port to select on my system. This #4 port would have the ability to accept any OS previously installed on a USB removable memory device. I tried doing that once and consequently boogered up the Windows boot process. Thus I had to reinstall all the Linux distros and fight the classic battle with nVidia. I do have more experience now, so rebuilding the Windows box would not be as difficult. I know what needs to be done. But do I WANT to?


As long as I'm thinking about it, I want to comment further on that library machine. The guy you talked to used a program I am familiar with to install a Linux OS onto a memory stick. In fact I have a program from the same vendor that will install more than one OS onto a memory stick if you care to do that kind of thing.

There are at least two ways to install an OS onto removable memory. One is to do what that guy did, i.e., create a bootable iso image. This is pretty common in that those types of installations are how "live" disks are made. And, of course, you would use a live disk to install the OS onto some other machine. This bootable USB iso can have persistence added to it. Adding that partition allows you to save data created by that iso image. For all intents and purposes, it looks like a full function installation of Linux, but it's not. The disadvantage of doing all this is that the OS system files cannot be updated. If all you are doing is copying your homework, then who cares?

The second method of installing an OS onto a removable device is to create a full installation, not an image. In fact you would use that image created above to install the OS onto a USB memory stick instead of putting it onto a hard drive. Then the memory stick looks like a hard drive with an OS installed on it. The advantage of doing this is that the system files can be updated and there is no need for a separate partition for storage. All the storage can be done inside the fully installed OS because it is not an image. It's a working OS.

So, I'm thinking the guy in the library was fooling himself. He claims he could boot any OS from that Windows box by using a special port created by UEFI. Well yes, such a port can be used, but it is unnecessary if the computer is set up in BIOS to boot from USB; the one in the library may not be set up that way. You don't need a special port to boot up a "live" disk because the GRUB bootloader on the live USB stick can handle it all. Likewise, if you do a full install GRUB should manage the booting process. All this makes sense.

Now, the glitch. When I create a fully installed Linux OS on a USB memory stick, GRUB is not written to that memory stick. GRUB is written to the Windows efi directory where it is etched into stone. The next USB device I plug in will not be recognized because the wrong UUID is now permanently part of the Windows efi partition. I need to find out how to be certain GRUB gets written to the USB memory stick (selecting that option during installation does not do it), and then I need to know how to get the Windows bootloader to recognize it.
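
From what I've pieced together since, and I have not verified this yet so treat it as a sketch, the answer involves pointing grub-install at the stick's own EFI partition instead of letting the installer find the Windows one (/dev/sdX2 and /dev/sdX3 below are placeholders for the stick's EFI and root partitions):

    sudo mount /dev/sdX3 /mnt
    sudo mount /dev/sdX2 /mnt/boot/efi
    sudo grub-install --target=x86_64-efi --efi-directory=/mnt/boot/efi --boot-directory=/mnt/boot --removable

The --removable flag is supposed to write GRUB to the stick's fallback path (EFI/BOOT/BOOTX64.EFI) so any UEFI machine can boot it without an entry in its own firmware, and without touching the Windows efi partition.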

The guy in the library did not answer those questions.
User avatar
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

R600 Video Package

Post by yogi »

Regarding your comments about a need to install a driver, I'm afraid I can offer little help. Whatever process in your system is asking for that driver obviously doesn't need it. So why is it giving you the message? My guess is that the wrong driver is installed. The obvious solution is to go to the manufacturer's website and hope you find the driver you need. You don't want a package of drivers. You want the one the OEM specified to work with your hardware.

There is also something called the initramfs, which runs during the boot process. It's a little image holding the drivers and scripts the kernel needs before the real filesystem is up, and it is easy for it to get stale or corrupt. Thus it is necessary to rebuild it from time to time. Yours may be looking for something that has long been missing from the system files but is still listed in the initramfs tables. So, the challenge for you would be to rebuild it. In Ubuntu the command is: sudo update-initramfs -u. Ubuntu inherited that tool from Debian in the first place, so the same command very likely works on your system too, though I can't swear to it. There is probably a definitive answer in some Debian forum. Your mission, should you choose to pursue it, is to find that forum. :lol:
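
If the same command does apply, the whole dance would be something like:

    sudo update-initramfs -u -k all    # rebuild the initramfs for every installed kernel
    sudo update-grub                   # refresh the boot menu while you're at it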
User avatar
Kellemora
Guardian Angel
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Hacking Linux

Post by Kellemora »

Hi Yogi - The problem with installing an OS on a memory stick is that it has to be recognized by the computer as an installation. This is done by adding it to the MBR or EFI. So if you remove the stick it leaves a hole in the MBR or EFI for a partition it cannot find. CRASH!

Whereas a bootable ISO does not require the computer to establish a bootstrap for it, other than boot from USB port.

I reboot so seldom, only after a power outage, that the 5 to 10 seconds it spends saying no R600 found is no big deal.
I think it recognizes I have a certain card the kernel thinks uses R600 but the driver for that card is not in there.
Since my graphics are excellent with whatever driver is running it, I'm a happy camper.
User avatar
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Re: Hacking Linux

Post by yogi »

When I feel ambitious I'll look into multi-booting USB devices again. When I started my research I found an article explaining what is involved and how to do it. I understood it and hesitated because it involved getting deeper into the Windows boot system than I wanted to at that time. That is the only article I ran across, which suggests very few people are doing it. That's very unfortunate because it was a common practice and easy to do on the old BIOS/MBR systems.
User avatar
Kellemora
Guardian Angel
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Hacking Linux

Post by Kellemora »

I may be wrong on this, but I thought I read somewhere that you can use the computer's BIOS to point to the EFI bootloader and still have secure boot in place. Since I wasn't working on doing something like that, I didn't read the whole article.

On a different topic. I just found out that ORNL is going to use pure RedHat Linux on the new supercomputer instead of pure GNU/Linux, which means they won't be using Debian based commands anymore. I think what they meant by that was they will not be using RedHat GNU/Linux, just the RedHat Linux which has a different command set.
So, apparently RedHat has added something to replace GNU? Like BSD did I guess?
User avatar
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Re: Hacking Linux

Post by yogi »

First of all, you probably know this already, but the BIOS you and I grew up with died and was buried when EFI booting took hold. UEFI can emulate BIOS and be made to look like the old firmware we know and love. Computers made since Windows 8, however, no longer use that original BIOS because Windows no longer will boot from that firmware. All the boot instructions were located on a Master Boot Record (MBR), be they the Windows bootloader or Linux Grub, or whatever else you had installed. Well, that MBR is no longer present. BIOS would merely detect the presence of a bootable device and hand off booting to it.

Keep in mind that MBR, among other things, refers to how the storage media is formatted. In the UEFI world the new disk formatting scheme is called GUID Partition Tables (GPT). If you recall one of my other threads I was belly aching about how installing a certain version of Linux altered the UUID's and messed up the booting of everything except Windows. UUID is exactly the same as GUID in terms of functioning. It means that each partition now has a unique identification number associated with it. EFI points to the UUID and hands off booting to that device.
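
If you ever want to see those ID's for yourself, one command lists them (this is the lsblk tool from util-linux, present on most distros):

    # show each partition's filesystem UUID and its GPT partition UUID
    lsblk -o NAME,SIZE,FSTYPE,UUID,PARTUUID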

One other point to mention is secure boot. That's an option, not a requirement, to boot in an EFI environment. Securely booting means the firmware checks the bootloader's signature against keys it holds before handing off booting. A successful check assures that nothing but trusted, signed code gets loaded. All versions of Windows ship signed that way, but it's not necessary to use the check in order to boot successfully. Not all Linux OS's are signed, which is why they say to turn off the secure boot option in BIOS so the firmware doesn't refuse to load an unsigned bootloader.

So, to answer your question, the option to boot securely or not is a switch in the EFI (BIOS) firmware. It does not point to anything. It tells the EFI firmware to run a signature check prior to handing off booting. The signed bootloader lives in the efi partition. Original BIOS doesn't have that partition and doesn't look for it. UEFI requires it.
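
As an aside, from inside a running Linux you can ask the firmware which way that switch is set, at least on systems that ship the mokutil tool:

    mokutil --sb-state    # prints 'SecureBoot enabled' or 'SecureBoot disabled'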

... and now for something completely different:

Linus Torvalds came up with the earliest version of the Linux kernel - he built it on Minix, Andrew Tanenbaum's little teaching OS, which is probably the name you're thinking of. And, of course, the kernel by itself didn't do much and needed a lot of helper programs to communicate with the world. Those helpers, or tools, already existed as the GNU project, which the Free Software Foundation had been building for years, so pairing the two was the obvious thing to do back then. So, essentially, the first Linux kernels shipped with GNU tools that were used to create subsequent operating systems. Apparently Debian and RedHat have their own versions of those tools, which may be run with or without the GNU tools. I don't know what ORNL is doing, but it would indeed be in their best interests not to use tools they don't need. It also would serve their own interests best if they wrote their own infrastructure, but that takes a lot of time and expertise. So, maybe they are just using off the shelf tools from RedHat. Maybe, but I doubt it. It is a lot like BSD when you go around replacing the fundamental methods of interacting with the kernel. I can't believe anybody who would need a supercomputer would be happy with off the shelf products, but those folks at Oak Ridge might be special.
User avatar
Kellemora
Guardian Angel
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Hacking Linux

Post by Kellemora »

On one of the tours I was on at ORNL, I asked how on earth they could address so many CPUs.
All they could say was a job was broken down into several pieces and each piece of the job uses a CPU.

IBM is an example where they use two different OS's - the CNK OS on the compute nodes, and Linux on the I/O nodes.
CNK only runs one CPU and user at a time, which is why they use Linux to assign programs to CNK.

Most super computers use a Linux kernel and their own OS's for various operations.
From what I understand RedHat Enterprise is the most scalable and easiest OS to use on supercomputers.

If we change gears and look at Server Farms, they want to use a kernel and OS that does nearly everything they need it to do out of the box so to speak, with only minor modifications. This is why most Server Farms use a Linux kernel and the GNU intermediary OS, but may use other OSs on top of GNU for various functions.

I still don't understand the purpose of Video Cards on a computer with no video, except perhaps to add more memory and CPU or GPU modules to the system.
User avatar
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Re: Hacking Linux

Post by yogi »

You guessed right. Adding GPU's is frequently done only to tap into their processing power for parallel computing efficiency. If you have ten jobs to run, that can be done sequentially using one super fast CPU, or you can run each job in parallel on ten GPU's. They communicate over the PCI Express bus, which is simply a means to transfer data between devices wired into the motherboard. Obviously you would complete those ten jobs in less time if you processed them all in parallel, and that's basically why people do it. That's also a justification for using an nVidia card: it can take the burden off the main CPU to process graphics. Thus you can have data crunching and graphics generation running simultaneously instead of sequentially. Of course it's still only one output from the CPU, so that parallel processing is good only up to a point.
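
You can see the same idea at the shell level, never mind GPU's; this just runs ten ordinary processes at once instead of one after another (job1 through job10 are made-up program names):

    # launch all ten jobs in the background, then wait for the stragglers
    for n in 1 2 3 4 5 6 7 8 9 10; do ./job$n & done
    wait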
User avatar
Kellemora
Guardian Angel
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Hacking Linux

Post by Kellemora »

There is one thing I don't understand about CPUs themselves.
We used to have single core CPUs, then dual core, then quad core, etc. I have no idea how many cores they have today.

I often hear that most programs do not make use of the extra cores. Newer programs may address two or four, but that's about it.

Now here's the part that confuses me. Why would the program you are running have any say-so about how the computer's hardware handles the task?
Seems to me, an internal operating system between the CPU and kernel, would handle all the hardware demands, perhaps like the GNU part of a Linux Distro.

The program you are running to do your work should just send a request to the internal controlling programs of the computer itself, and take the result back after the computer does its part.
After all, isn't that what the kernel and internal OS is all about?

As usual, over my head, hi hi.
User avatar
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Re: Hacking Linux

Post by yogi »

Now here's the part that confuses me. Why would the program you are running have any say-so about how the computer's hardware handles the task?
Short answer: Because the person writing the program determines how the processor is used.

Long answer: Sometimes it is difficult to separate the hardware concepts from the software concepts. An elementary computer consists of an input device, a processing unit, and an output device. To that you can add peripherals, but they are not essential to accomplish what a computer does. Information is input to a central processing unit which does some manipulation of that input. The results of that manipulation are sent to the output device. So, you can think of a computer as being three basic blocks. The purpose behind it all is to manipulate (process) the input, which is then sent to the output device.

These days it takes all kinds of software, firmware, and hardware to accomplish the simple task of manipulating data. The data can be stored, stacked, routed, and manipulated all before it is processed. This complicated process happens at a low level in every computer. Once the input is all sorted out, it is fed to the processor to do its thing. Assume for this discussion that the processor can read the input and spit out the results at a rate of 1.2 Gbps. That number is commonly known as the data transfer rate. That's about the max any processor can do because the laws of physics cannot be modified to make it any faster.

Looking inside a hypothetical processing chip itself, we see it has 8 cores. What does this mean exactly? It means eight processors are packed into one chip. Thus, in theory, I could transfer data at a rate of 1.2x8, or 9.6 Gbps, through this processing chip. Furthermore, we see a data bus, which is commonly 64 bits wide these days. There is also an address bus carrying memory address locations alongside the instructions headed to the processor. The input data is clocked into the processor while the instructions are also clocked into a buffer. When the processor receives enough instructions, it takes the data and does something with it. That "something" can be done in any one of the eight cores.

With eight cores to feed, many instructions to distribute to the right core buffers, and a few other things going on, that maximum data transfer rate turns out to be something like 4 Gbps, or thereabouts. Specific tasks that are sent to the CPU might be scheduled (stacked) elsewhere on the motherboard, as you point out; in a GPU chip set, for example. Regardless, it is the processing unit that sends the result to the output device, i.e., monitor, printer, hard disk drive, etc. And it is the program developer who decides how to do it.

What I described up to this point is low level assembly language, not programming language. The program developer indirectly decides which processor core to use and what that core will do with the data via an API (which we talked about earlier in some other thread). In this case the developer doesn't need to know how it's done. He just instructs the system to do it. The programming language of choice will provide the developer with the tools to choose which and how many cores to use for his purposes. So, if you have a clever app developer, s/he will make use of all eight cores to complete several tasks simultaneously. Or, maybe only four cores. Or, maybe just one.
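
You can even watch that choice being made from the outside. On Linux, for instance, a program can be pinned to particular cores at launch time (myapp is a made-up name):

    nproc                     # how many cores the OS sees
    taskset -c 0-3 ./myapp    # confine myapp to cores 0 through 3
    taskset -c 7 ./myapp      # or to a single core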

Clear as mud, eh?
User avatar
Kellemora
Guardian Angel
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Hacking Linux

Post by Kellemora »

What you said makes perfect sense of course.

Thinking way back to the days when we had a CPU and a Math Co-processor.
If we used the same spreadsheet on two otherwise identical computers, except one did not have the math co-processor, it ran painfully slow on the machine without the math co-processor. Yet the spreadsheet program was also identical.

Jump up to a newer computer that does not use math co-processors, using the same old spreadsheet program, and that sucker ran like lightning in comparison. Obviously, the math co-processor was built into the CPU chips by that time.

I realize how computers work in a general way; although I have studied same, most of what I learned is way over my head, and I'm not kidding here either.

I didn't think the actual program you are running, like a spreadsheet program for example, had to know how the underlying software and hardware did its thing; it merely needed to know which features of the underlying software it had to output the info to. In other words, a typical computer program does not send 0s and 1s, it sends its output in upper level code to a software program below it that understands it, which then converts the info for an even lower level software program. Like a hierarchy of building blocks of processes so to speak, each one feeding the one below it.

I realize a CPU merely executes instructions, one at a time, in order.

I've read this page several times, but it really does not show how a program talks to the computer, it simply shows input to the CPU which then handles almost all the work.
https://homepage.cs.uri.edu/faculty/wol ... ding04.htm

I guess I don't need to know how it does what it does, just that it does what I want it to do, most of the time anyhow, hi hi
User avatar
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Re: Hacking Linux

Post by yogi »

I thought I was long winded, but that article beats anything I could say about a relatively simple process. LOL

Well, you are on track when you think about CPU's dealing with only two states, ones and zeroes. A specific combination of those binary bits will turn on some transistors inside the chip, and turn off others. What's coming out of the chip is also ones and zeroes. That's about as low level as it gets.

When you download a Linux kernel, you have a package where the ones and zeroes are arranged into bytes and 64-bit words. A compiler arranged those bits and organized them so that a particular processor could react to the instructions contained therein. The compiler is designed to work with a specific processor, or family of processors. Down at the bottom, a human used assembly language to spell out the instructions to the processor. For instance, when the developer types in the mnemonic "mov", a string of ones and zeroes is created representing an instruction to the processor to "move" something. We need the human readable part so that the logic of the string of instructions can be clearly understood by humans.

The developer who writes assembly programs is typically creating functions. A function is a set of commands that performs a specific task. The end result could be the addition of two numbers: fn add(a,b), for instance. Behind that is a hard coded string of ones and zeroes that makes the processor move data from the input registers to the registers that will produce the sum of the two numbers.

The next level up would be the programmer who writes code using those functions. S/he doesn't need to know how the processor works, but it helps. At that stage all the developer wants to do is add two numbers using the function called "add." It gets complicated from there.

In the practical world the programmer may want to read the output of a keyboard key press and store it in a given memory location so that it could be displayed on a monitor screen. I'll make something up here for an example: read #0099 >> C; stor C @00492011. You can almost tell what is going on just by reading my made up code. A compiler for the programming language du jour translates that instruction into a binary string, something the CPU can act on. The end result is an organized set of instructions, a program, that will perform a given task. It could be as simple as a calculator or as complex as battling Godzilla in 3-D graphics over a network.

All along the development path the key to success is logic, Boolean logic. If you do this you can expect that. The goal of the programmer is to combine the this and that in a logical order for the CPU to execute. Learning a programming language is easy compared to learning how to think logically like a machine.

All these ones and zeroes are stored in memory which the processor can read. This is how human logic gets transferred to a machine that can make logical decisions. Many devices and blocks of electronics help make this happen. In the end the logic and reasoning of a computer programmer is converted into a string of ones and zeroes. The CPU digests that data and sends its results to other machines that humans can interact with. Exactly what the hardware does and how it does it depends entirely on the knowledge of the programmers and the tools they are using. Obviously a good programmer needs to know more than just the programming language.

So, yes, each and every individual program you use has the logic and knowledge of the programmer embedded. You probably didn't need to know anything else I wrote here. :mrgreen:
User avatar
Kellemora
Guardian Angel
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Hacking Linux

Post by Kellemora »

Eons ago, I did learn the BASIC language well enough that I could do things with the processor by using peeks, pokes, and calls.
Ha, that was many of my lifetimes ago too.
I did study low level formatting more out of curiosity than anything else, and actually wrote some binary code a couple of times to do something really simple. At the time, I thought this is exactly what programmers had to do, hi hi.

I looked at C and C++ a couple of times, but never could understand it well enough to use it for anything.

The sad thing is, even if you do learn a language, any language at any level, if you don't use it daily, you will forget it.
I'm in that boat right now. I've not had to write HTML5 now for about 4 years. The few changes I did make to my pages after that time were simply adding to existing code I already had in place.
Now I have something a little more major I have to do, and I'm looking at my own HTML5 code like it is a stranger to me.
I have no idea why or how I did the things I did, so for me, it is like starting all over from scratch again.

But I will figure it out again and persevere!