Awesome Computer

My special interest is computers. Let's talk geek here.
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Awesome Computer

Post by yogi »

I've been busy the past few days building the most awesome computer ever. Just in case you are interested, I'll describe it to you in some detail. If all you want is to see the pictures, then scroll on down to the bottom. Otherwise, here is the geeky lowdown.

MOTHERBOARD: ASUS Republic of Gamers Maximus VII Ranger Z97. I never used the Z97 chipset before and I have yet to see if my favorite flavor of Linux (Ubuntu) has drivers for it.
µProcessor: Intel Core i7-4790K Processor. This CPU has 4 cores (8 threads) and will normally run at a 4GHz clock speed. It can be overclocked to 4.4GHz, and I talked to somebody who pushed his to 5GHz.
CPU Fan: Cooler Master - Hyper 212 EVO. I didn't use the standard Intel supplied fan because it's always been noisy. The fan in the upper middle of the motherboard in the pictures is the Cooler Master.
RAM: Kingston HyperX Savage RAM – 16GB / 2400MHz. I've not seen it fly at top speed yet. So far it is perfectly happy at 1600MHz clocking.
PSU: Corsair RM Series™ RM650 — 650 Watt 80 PLUS®. I had some doubts whether 650 watts would be enough power, but it exceeds the total power consumption of the entire system. It's very easy to swap out if I need to get a bigger one.
VIDEO ADAPTER CARD: EVGA GeForce GTX 960 SuperSC ACX 2.0+. This is not the top of the line from nVidia, but then it didn't cost as much as the entire computer either.
STORAGE: The SSD, HDD, and DVD are from my old system. The SSD has 500GB and the HDD has 340GB capacity. I'm planning on adding another old HDD of the same kind today.
CASE: Corsair Obsidian Series 650D. This one is probably meant to be configured for a server, but I liked the quiet running fans and all the expansion slots for HDDs. It gives me room to grow.

There are so many awesome things about this system that it is difficult to list them all here. Besides, I haven't found them all or tried them out. The case was the most impressive part of the system with all those cutouts and grommets to route cables out of sight. ASUS did a marvelous job of documenting their motherboard and how to connect it. That's one reason I went with this type of board. The BIOS is like an operating system unto itself, complete with mouse usability in the interface.

Windows rated my system performance as 7.8 on a scale of 1 to 7.9. They say the processor is slower than ideal in one or two instances. Right, as if they can keep up with my hardware. The possible downside is that ASUS does not support their motherboard for use with Linux. It's UEFI and BIOS and designed for modern Windows operating systems. I've read where others have had success booting into Ubuntu, and that is what I'll be trying to do today. I'll be doing that on the hot swappable drive bay that is built into the cabinet.

I know what you are thinking, and the answer is no, I do not NEED this kind of computer. However, it's the first time in my life that I could actually have one like it.

Image

Image
Kellemora
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Awesome Computer

Post by Kellemora »

Hi Yogi

I saw your mention of it on Farcebook yesterday.
VERY COOL Machine indeed!

Not being able to run Linux on the first motherboard I got is why I had to take my brand new computer I bought after the lightning storm back and wait for them to order another motherboard.
FWIW: My mobo is an ASUS F2A85-M2, similar to the one mentioned on the Farcebook comments under your post.

I've installed Ubuntu, Mint, Debian 7 and the new Debian 8 on it, along with Windows XP-Pro, no problems with Linux on this particular mobo. It has UEFI and EFI, if there is a difference. And GRUB will use both normal boot and with EFI boot in fallback mode.
I went with the AMD A4 6320 CPU at 4.0 GHz, because it is much cheaper than Intel CPUs.

The only driver issue I had was the graphics/video driver. This particular motherboard uses Radeon instead of nVidia.
I have the right Driver and installed it right after I got the machine. But then a kernel update killed everything and I ended up with a black screen and I'm not smart enough to figure out how to get out of it. Grub did not have the last kernel in the list to fall back on like on my other machines. It still doesn't, I should fix that so I can reinstall the right video driver.
The new Debian 8 does have the right driver in the kernel package, but so far, I cannot get Debian 8 to do much of anything. Things I need and use each day are removed and another method used in its place. I spent like three hours trying to find out how to add to the panel with no success. But without my workspace boxes spread out on the bottom panel, the extra two steps of changing screens is not worth the hassle. I think Debian 8 was released way too soon, as it is not at all user friendly by any stretch of the imagination.

Back to your machine:
You seem to have a LOT of room inside.
I have one machine which is a commercial class machine, and the heat sink and fan on the CPU takes up half the box.
I really like that Corsair Case, neat and clean. I'm sure your power supply is plenty big. Most of mine, until the latest, had only 350 to 450 watts, unless I replaced the power supply then jumped up to 500 or 600 watts. I think my new one has a 650 watt now too. Unless you do a lot of gaming or heavy graphics work, like making videos 650 should be plenty.
Keeping the CPU cool is the hard part. My step-son went with a water cooled system and even it gets hot with his online intensive games.

The thing that killed me about these new mobos is they are all SATA with no way to use IDE drives.
I did pick up some external hard drive cases, but don't use the case itself, just the USB connector and parts from inside to swap different HDs to store stuff on. I did have an external IDE connector and power plug sticking out of my older computer, but had to shut it down before adding or removing a drive. So using the parts from a USB external case did the trick. Just wish I had about six of them, hi hi...
I would have around 4 terabytes of storage. But then, I could buy a pair of 2 terabyte SATA drives for about 99 bucks each these days, I think. My pair of 1 terabyte drives were only 89 bucks each three years ago.
I always buy in pairs, so I know I have enough room to mirror one to the other. One in my office, the other down in the house. They were originally bought to back up my 2 terrabyte NAS which the lightning destroyed. Good thing I had backups of it.

I'm spending my evenings after work, working on my website. Right now I'm redoing the index.html page, coming along fairly well. I get frustrated when something doesn't work and I spend hours trying to figure out why. Then want to kick myself in the behind when I finally discover it was because of an extra </div> or an </li> I didn't remove due to a nested <ul> following. I usually check the W3C validator after moving each block of code and changing it around. But often this does not help find what the problem is.
I learned to use <section> instead of <div> but this causes even more problems. You get a warning if you don't use an h1 or h2 following a <section>, but you also get a warning if you use <section> with h1 further down the page too. I thought it was because I modified the h1 in the css so lost hours trying to fix this complaint from W3C. The fix was simple! Get rid of <section> and use <div> and the error went away, and I could modify the h1 appearance.
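[Editor's note: for anyone hitting the same validator complaint, the warning comes from <section> being a sectioning element that is expected to carry its own heading, while <div> carries no such expectation. A minimal illustration with generic markup, not the actual page in question:]

```html
<!-- W3C warning: section lacks a heading -->
<section>
  <p>Some content...</p>
</section>

<!-- No warning: div is not a sectioning element -->
<div>
  <p>Some content...</p>
</div>

<!-- Also no warning: give the section its own heading -->
<section>
  <h2>Section title</h2>
  <p>Some content...</p>
</section>
```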

You have a really cool machine there Yogi, I hope it lives up to your expectations.
I also hope the mobo you selected is not one locked for Windows ONLY, as so many are these days.
But I learned ASUS makes both, locked for Windows ONLY, and ones you can install any OS on, while still keeping the UEFI for the Windows partition working as it should. In other words, like on my board, you don't have to turn UEFI off; the BIOS automatically uses it based on what OS you installed. Maybe this is why Debian has both normal boot and EFI fallback boot? Over my head!

I'm jealous, hi hi...

TTUL
Gary
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Re: Awesome Computer

Post by yogi »

ASUS is guilty of providing too many options on this motherboard, if they are guilty of anything. It's going to take me a long time to become familiar with the BIOS, but last night I did manage to set up multi-booting of sorts. Before I did that I had to solve an old Windows problem that I was able to ignore for years. The day Vista came out I bought a new custom built computer system and learned about Microsoft's mistake the hard way. Eventually I got Vista to run pretty well, but it literally took years to do. The first day Windows 7 was introduced, I got a full license for that and another hard drive to install it. As Windows is wont to do, it merges all its OSes into one package. This is transparent until you want to do what I tried to do.

There is something called Boot Configuration Data (BCD), which is simply a store of all the Windows-recognized operating systems on your computer. That is the list of choices you see when you boot. Well, there is only one copy of that BCD store and it lives on the first OS to be installed. So, when I made the dual boot for Windows the default was to boot into Windows 7, but the list of choices, the BCD store, was on the Vista hard drive. This was annoying but since Microsoft had no trouble keeping it all together, I didn't think it was significant enough for me to worry about.
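[Editor's note: for reference, Windows manages the BCD store with the bcdedit tool, run from an elevated command prompt. These are standard commands, sketched here for illustration rather than as the exact steps used in this thread:]

```
rem List the entries in the active BCD store
bcdedit /enum

rem Back up the store before touching it
bcdedit /export C:\bcd-backup

rem Inspect a store sitting on another drive (e.g. an old OS disk)
bcdedit /store D:\Boot\BCD /enum
```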

Then I added another hard drive with Linux on it. Well, Linux surprised the daylights out of me because they are even more invasive than Windows. By that I mean that Grub became the default boot environment. Grub replaced the Windows BCD, which worked fine, but was not what I wanted. I wanted to keep Linux and Windows separate and apart. Eventually I figured out that Grub had to be stored on the Linux drive and the MBR rebuilt to call the Windows bootloader. In order to accomplish this I had to make sure BIOS called up the Windows hard drive first. It was quite a learning experience, but I did it and it worked for me for several years.
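[Editor's note: the setup described here, Grub living on the Linux drive and handing control to the Windows loader, is done on Grub 2 with a chainload entry. A sketch for /etc/grub.d/40_custom; the disk and partition numbers are placeholders for your own layout, and you run update-grub afterwards:]

```
# /etc/grub.d/40_custom -- hand off ("chainload") to the Windows
# bootloader on the first partition of the first disk.
menuentry "Windows 7" {
    insmod part_msdos
    insmod ntfs
    set root=(hd0,msdos1)
    chainloader +1
}
```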

Skipping forward to my latest ASUS motherboard system, I wanted to be sure that I could duplicate what I had before I bought into this new way of doing things. ASUS explicitly states that it does not support Linux, but there are many stories in their forums of people who dual boot Ubuntu with Windows. Of course they are in the forums because they have problems, but I was convinced it could be done. I assembled the new computer with a single hard drive and Windows 7. It would not boot. Then I remembered the BCD problem and added the Vista hard drive. Voilà! Dual boot city. I was able to get Windows 7 working but when I tried Vista, it crashed. Apparently my new motherboard cannot deal with a 32-bit operating system. This Vista OS was the last one I had based on 32 bits. Well, actually, I was able to run Vista after a clean install, but nothing would upgrade. My mobo was all 64-bit as far as driver software is concerned, and that eliminated Vista from doing anything beyond what was on the install disk.

The problem of the BCD file now had to be resolved. I managed to rebuild the BCD file so that it now sits on the Windows 7 disk. The menu still lists Vista as a choice even though the disk has been removed from the system. I thought I rebuilt the BCD, but apparently all I did was move it. There is a fix for that too if I use a special BCD editor, which I will do in the near future. So ... then I partitioned my Linux drive into three pieces. Two ext4 partitions and one swap. I installed Ubuntu 14.04 LTS on one partition, and Ubuntu Mate on the other. Ubuntu Mate did exactly what Linux did the very first time I installed Ubuntu: it not only rewrote the MBR but hid the Windows disk from the boot menu. So here is a case where Linux prevents Windows from booting. Back on square one, I booted into the LTS version and fixed Grub so it was stored on the Linux disk where it belongs. I went to the Windows disk and restored the normal Windows boot there. It still shows Vista by the way, but now the boot loaders are separate and I can use BIOS to select which system I want to boot into.

This morning I updated Ubuntu LTS and I get all the way up to the log in screen. I log in and then it crashes sending me back to the log in screen. The update obviously changed the kernel and blew away any stability I thought I had. Hopefully Grub will give me a fallback to the old kernel so that I can install the driver into the new kernel. While this is all a perverted kind of fun, I never, repeat NEVER, had to go through all this incompatible video driver stuff or recompile a kernel in Windows.
Kellemora
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Awesome Computer

Post by Kellemora »

Hi Yogi

You are way over my head here.
I do know I have to install Windows FIRST, then add Linux later to prevent problems with Windows booting.
But as you discovered, put GRUB on the Linux partition, and let it find the Windows OS.
Then grub will send the boot sequence to the original Windows mbr to boot Windows.

Or like you have discovered, on these newer UEFI machines, allow the Bios to select which HD to boot from.
On my machine, from a cold boot, it will boot into Debian 7, unless I press the F12 key during power-on.
Then it will ask which HD, or Partition to boot from.
I don't normally have to do this if I let Grub load and boot Windows from the Windows boot line in Grub.
But on one machine, I may add a different hard drive to use for something, and then use Bios to boot from the newly added drive instead of Windows Drive C. All of them appear as drive C, if I boot from them of course, hi hi...

I don't see why you would be having Kernel Problems. Each OS I boot into has its own Kernel.
What I don't like about Grub2 is it does not list previous Kernels. I'm sure this can be changed, I've just never looked into how to do it. I would like to have my previous two kernels available to boot from.
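[Editor's note: Grub 2 usually does keep the older kernels, but it tucks them under an "Advanced options" submenu, and its menu is generated rather than hand-edited. The generated menu is controlled from /etc/default/grub; a sketch of the relevant knobs, after which you run sudo update-grub:]

```
# /etc/default/grub -- edit, then run: sudo update-grub
GRUB_TIMEOUT=5            # show the menu for 5 seconds
GRUB_DISABLE_SUBMENU=y    # list every installed kernel at the top level
GRUB_DEFAULT=saved        # together with GRUB_SAVEDEFAULT, remember the
GRUB_SAVEDEFAULT=true     #   last entry booted and make it the default
```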

Changing topic:
Before I moved south, and before I started using Linux again, our little computer club was active in building cluster computers, using old steel small parts closets, like tall gym lockers with slots on each side. They used aluminum bakery trays to mount the motherboards and other things on. The wiring was in the front so not fancy looking.
I doubt what they were doing was much different than having a quad or eight core CPU, and they had to write the software to utilize all this. But they made it sound easy as pie, hi hi... The goal for this part of their project was speed.

The second part of their project is the part I was most interested in. But could never figure out how on earth they did it. OLD IDE drives were a dime a dozen, and club members always brought in old ones they no longer used.
They had this huge steel box, 8 slots high by 6 slots wide, they would slide the hard drives into after mounting the rails to the sides of the hard drives. A large blower mounted on the back sending the exhaust air up toward the ceiling drew air across each of the hard drives, thanks to a hose behind each one on the back of the cabinet.

IDE cabling can only take two HDs per cable, plus a mobo could only have two twinned IDE ports or plugs.
Now that we have external USB hard drives, I'm thinking maybe they were doing like I do now, using the guts from an external enclosure along with a power supply for each to power so many hard drives. Although I never saw anything except flat ribbon cables connected to them.

In my case, now that computers only have SATA and USB ports, I've got at least a dozen IDE drives lying around here. Short of buying more IDE enclosures, is there something I'm missing here?

How does a large company, take Amazon for example, have enough hard drive space to store all the data they store?
Amazon sent me a price list a couple of years ago about the cost to store data on their servers.
Seemed crazy low, but the amount of space the prices showed took me a while to understand.
1 T of storage was 14 cents per month. NOT, it's 14 cents per Gig per month, Ouch...
But the amount of storage you could buy went from 1 T to 50 T to 500 T to 5000 T, so they must have a lot of storage available to sell to offer it in 5000 T chunks, hi hi...
Our home hard drives have just now come out with 2 and maybe 5 T drives since the last time I checked.
And I see cabinets, similar to NAS cabinets which hold several hard drives, but the cost.
Not that I need it, but if I wanted a 500 T system, it would cost well over a million dollars, and then you would need to be able to back it all up.
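[Editor's note: the per-gig arithmetic above is easy to misread, so here it is spelled out, using the 14-cents-per-GB-per-month figure quoted in this thread, which is an old price, not a current one:]

```python
# Storage priced per gigabyte adds up fast.  Rate quoted above:
# 14 cents per GB per month (a years-old quoted price, not current).
rate_cents_per_gb = 14
gb_per_tb = 1000                     # decimal TB, as storage vendors count

monthly_cents_1tb = rate_cents_per_gb * gb_per_tb
print(f"1 TB costs ${monthly_cents_1tb / 100:.2f}/month")
```

So 1 TB is $140 a month at that rate, not 14 cents, which matches the "Ouch" above.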

I talked to an IT guy who worked for a very large company. He said they do not need to back up because every node has the same data. Huh? He didn't have the time and couldn't explain it to me anyhow.
A game like Farm Town has over 4 million players, and they can roll back your farm to a previous date, but have no backup system. How can they roll back if they don't have backups? All the data is available from all nodes and your farm data is just piled up day after day like appended data. Eventually the older stuff scrolls off the system.

As far as a small datafile goes, I can understand this principle. Have a datafile which holds 500 blocks of data, and when the 501st block of data is added, block 001, actually block 000, falls off the data list. First in, first out: the oldest block at the bottom of the stack gets deleted.
Some day I'll study up on how nationwide or worldwide servers work to maintain data available from all nodes. But for right now, it is so far over my head, it swims just thinking about it.
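[Editor's note: the 500-block datafile described above, where block 000 falls off when the 501st block arrives, is a bounded first-in-first-out queue. A minimal sketch in Python; the block names are made up for illustration:]

```python
from collections import deque

# A datafile that holds at most 500 blocks: appending the 501st
# silently drops the oldest (first in, first out).
blocks = deque(maxlen=500)
for i in range(501):                 # add blocks 000 through 500
    blocks.append(f"block {i:03d}")

print(len(blocks))                   # still 500
print(blocks[0])                     # block 001 -- block 000 fell off
```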

Only question: How safe do you consider cloud storage to be? What is the chance of losing your data? Or having hackers get into it?

TTUL
Gary
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Re: Awesome Computer

Post by yogi »

You have heard of DDOS attacks to shut down servers I presume. The idea is for the attacker to get his/her entire bot net (which may include your computer) to make requests to the target server. Enough requests coming in simultaneously will shut down the server. It is not too hard to imagine how that works even if you don't know the details. Hackers have shut down large businesses and government computers using this technique. The hard part, of course, is infecting enough computers with a Trojan horse so that they will be available when you need them. I read about how some hacker organization tried to shut down Amazon using DDOS. They couldn't do it. There are not enough hijacked computers on the planet to make enough requests from Amazon's server farm to shut them down. As far as I know, Amazon has never been shut down this way. They simply have enough servers to handle things like the Christmas Rush, so some punk hacker and his bot net is just a minor irritation and not a threat.

If you think Amazon has enough servers to survive any attack aimed at them, think about Google. What the hell do they have that allows them to return about six million records of data in .43 seconds? That is actually a slow time because most of their searches are completed in a couple dozen milliseconds. If you think that is amazing, then think about what it takes to cache every web site they ever crawled. If it ever was indexed, it's out there. What kind of memory does it take to save almost every web page ever put on the public net?

I don't know how Google or Amazon does it, but I am pretty sure they are not using IDE or SATA or anything you and I can buy from TigerDirect. :mrgreen: I would question the million dollar price for 500TB storage because I can buy a 1TB drive for about $50. That would be about $25,000 retail. I'm not sure I'd know what to connect them to if I had them, but it is an interesting thought.

I'm guessing you don't have to back up RAID systems because they spread a block of data across the entire array. Thus if one drive goes down the others take up the load until you hot swap it out with a clean one, and the array repopulates the new drive on its own. I don't think people like Google or Amazon need to back up anything. The data farms use massively distributed, replicated storage, I'm sure, and it's simply always up.
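[Editor's note: that is roughly how parity RAID (RAID 5 and friends) works: a parity block computed across the stripe lets the array rebuild any single failed drive from the survivors. A toy sketch of the XOR arithmetic involved, with two-byte "blocks", purely illustrative:]

```python
# Toy RAID-5-style parity: parity = XOR of the data blocks, so any
# single lost block can be rebuilt from the others plus parity.
d1, d2, d3 = b"\x01\x02", b"\x0f\x0f", b"\xaa\x55"
parity = bytes(a ^ b ^ c for a, b, c in zip(d1, d2, d3))

# "Drive 2" fails: reconstruct its block from the survivors.
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(d1, d3, parity))
assert rebuilt == d2                 # the lost data is recovered
```

Worth noting, though, that this protects against drive failure, not against deletion or a site-wide disaster, so it is not a backup by itself.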

My tower configuration might sound complicated, but it's all pretty basic stuff. The BIOS is set up to prioritize which hardware device gets the boot command first, second, third, etc. The chosen hardware device has the ability to sort out what operating system gets loaded. In my case Windows gets top priority because it cannot see any of my Linux hard drives. That is exactly what I want because I'm trying to isolate Windows from everything else. Unfortunately, as my above story points out, Windows can see every instance of Windows on a given machine so that it's not easy to separate one Windows OS from its brothers. If I want to boot into Linux, I hit F8 (the exact key is different in every BIOS) and then I get a list of all my bootable hardware. When I pick the Linux drive with Grub, the Grub menu sees it all; Windows included. That's fine because it's not my default boot scenario. The choice of operating system to load varies depending on whether I use the Microsoft bootloader or Grub. Each has its own list and menu structure. I explained a bit about how Windows uses a BCD to generate menus, but Grub is infinitely more complicated. I read about how it's done, but still can't tell you about it off the top of my head. There is a .list file with the menu of things Grub can boot into, but editing that file is not enough.

Grub was a hopeless case until I found the Grub Customizer package. Since it works in Ubuntu, I'd guess it will work in any Debian-based distro, but I could be wrong about that. The customizer has scripts it runs to look for operating systems, to generate memory tests, and to list advanced options (read that to mean go back to a fallback kernel). The scripts generate a menu which you can completely edit. You can move items around, delete things, rename the menu items, as well as change fonts and backgrounds on the Grub menu. The important point here is the scripts. You don't have to know how to add alternate kernels or operating systems to the Grub menu. There is also an option for where to save Grub. I love this program and would be very lost without it. The program does it all for you.

When doing a fresh install of Ubuntu, it runs something like the scripts in Grub Customizer. Of course there are no fallback options for a fresh install, but the installer does find every operating system on the machine and lists it in the menu.

As you like to tell me when I rant about Linux, the kernel is used in billions upon billions of devices. That is possible because the kernel is so generic. If you want it to do something specific, such as not using the built-in nouveau graphics driver and using your nVidia one instead, you have to add a module to the kernel and then recompile the kernel with your addition attached. Of course some things simply cannot be added because the Linux organization and the nVidia developers don't see eye to eye. Compiling the module for my drivers is the challenge - fortunately nVidia offers one way to do it. However, if you upgrade to another kernel, you have to do the recompiling all over again. It's just one more reason I think FOSS is not an ideal solution.

Regarding cloud storage, I must concede that it's a necessary evil. However, it is a concept that is still a work in progress. Expect it to change in your lifetime. Currently with the proliferation of mobile devices, people tend to have more than one "computer" at their disposal. Syncing them all is a nightmare if you don't have a common storage area for all your preferences. Enter The Cloud. Google, Microsoft, and Apple all have their own clouds for syncing your devices. You can store other data there too, which is touted as a good way to collaborate with your colleagues. Well, that's true only if all your colleagues are on the same cloud as you. When you work for a big company, their IT people figure all that out and in fact may have a cloud of their own for business collaboration purposes.

What about non-business use of the cloud? Well, I'm sure you heard the story not too long ago of how some pop culture figures got their naked selfies hacked and distributed all over the Internet. I thought it was cute, but the owners of the pictures were humiliated. I don't have to state the obvious, but I will. Clouds are vulnerable to hacking, so don't put anything there you don't mind having hacked. Encryption and security are going to get better, and cloud storage at some point in the future will be as safe as putting something on your LAN's NAS. Then, being the skeptical guy I am, there is a privacy issue with clouds. The operators of cloud storage will swear on a stack of Bibles that your data is safe with them and that they won't hand it out to anybody (unless the Feds happen to take an interest in what you are doing). Well, that's as reassuring as my doctor's nurse having access to my medical information. She is bound to be professional and honor my privacy, but what if the nurse has less than perfect morals? Do I trust the doctor's entire staff with my medical information? Do you trust employees of Microsoft, Google, or Apple with access to your business data?

Cloud storage is necessary in some cases and risky in others. It's up to you to decide what you can trust putting out there. Personal records, probably don't matter. But, business information might be better placed on a private cloud where you know who it is that can see your confidential data.
Kellemora
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Awesome Computer

Post by Kellemora »

Hi Yogi

Yes, I've heard of the attacks where thousands of computers all hit the same server at the same time. Didn't know what it was called though.
Google is Amazing to say the least. You can look up your original first index.html page to see what it looked like, and every so often after the original, or each time it had a major design change, they store a copy.
Eons ago, when we used PayChex to do our payroll, they stored all of our company data forever, but it was done on tape drives, and it may not be accessible unless you had them mount the tape for the years in question. In later years, everything was accessible at any time, no having to mount a tape reel anymore, but how I don't know.

I agree with you about Grub2. The original Grub was easy to maintain and change, because you did so right on the grub file. The newer version, like a few other things in Linux, has one file you can alter, but it has to be converted to the file Grub reads. Sorry I don't remember the names of these files without looking it up again. I've set all of my computers to use the package maintainer's version of everything. It keeps me from messing things up, hi hi...

I don't understand WHY they set it up so things have to become part of the Kernel. I so agree with you about FOSS!
Seems to me it would make much more sense to have a stand-alone file for all Drivers, and the Kernel could look to this file for which driver to use for each device. Windows may have the better way of doing it in this regard. Although I do know some things you have to reboot Windows after changing the driver, but other things you do not.
Plug in something new in Windows and it says New Device Found, Installing the Drivers for it, Your New Device is Ready To Use. However, that being said, Linux does the same thing IF the driver is in the Kernel already. Perhaps in Windows, if you have to reboot, it is because the driver is not in the Kernel, and it handles this automatically.

I have the driver for my Radeon 8500 graphics, installed it once, then a Kernel upgrade gave me the black screen of death, and I didn't know how to recover from it, since Grub did not have the previous Kernel as an option to boot from.
When I do boot up, I get the boot screen message I need to turn on blah de blah to use the proper driver. Trouble is, if I turn on blah de blah, I still get the black screen because the driver package only covers the 5000 and 6000 series.
I know in Debian 8, the 8000 series driver is in it, but from what I've tried of Debian 8, I don't like it at all. It can probably be changed to the way I like it, but I've not found the necessary setting or files available to do so yet.

All I can say about my NAS, is the same thing you've heard me say before. If the controller card goes bad, you are SOL. It is a good thing I backed up my NAS, because the lightning strike took out my NAS. To compound the issue, the controller system used in the NAS was not anything near standard to anything. So even though the drives still had the data on them, it was not accessible. So, using my old method of mirroring my storage hard drive to an off-site location is still the best way to go. In my opinion anyhow. I'm just glad I did not trust the NAS, and had this fear of the controller card going bad. Turned out to be a valid fear! Also, if my house burned down, without an off-site backup, I still would have been SOL.

When I checked into on-line storage two years ago, almost all of the prices I was getting were in accord with each other. Roughly $1.99 base monthly fee plus 14 cents per gig per month for 1 terabyte to 50 terabytes.
Since then, I found enough storage space on existing places, such as my Comcast account where my web space is held. But, this could be deleted and is not secure.
Google has a much better deal currently. However, by my calculations, my cost would have been around $5.00 per month, but their bill was for $52.60 for the first month, and $41.00 after I pulled everything down, because I crossed into the next week, two weeks after the bill arrived. At least they didn't charge me for the entire month like they say they do. Maybe they did, but I had no access calls after taking everything down.

Heck, it would be cheaper just to get a hosted website and use it for storage instead of web pages, hi hi...

Have a great day Yogi!
yogi
Posts: 9978
Joined: 14 Feb 2015, 21:49

Re: Awesome Computer

Post by yogi »

Distributed denial-of-service (DDOS) attacks have been around for a long time. It's a pretty simple way to do some serious damage, but defending against it has apparently become big business. It's like antivirus software. There are a ton of companies out there selling defense system software and it's enough to make you think those same companies are initiating attacks so that you feel a need to buy their products. The control daemon (demon?) sits inside your desktop dormant until the day comes for an attack. Then the head bot net dude broadcasts a general command for all those dormant Trojan Horses to call up a web site. You don't even know it's happening on your computer because it takes so little to request a web page.

GRUB is an interesting critter. One reason I suspect they had to upgrade it is for security purposes. It was too easy to modify prior to Grub 2. I'm also guessing that it's part of the Linux community's attempt to implement the UEFI standard. By the way, if you are interested in a fairly easy-to-understand explanation of UEFI, take a few minutes to read this: https://www.happyassassin.net/2014/01/2 ... work-then/

The Linux kernel, like anybody else's kernel, does some very fundamental things. It sticks to the basics unless you are designing your own personal kernel which can include a lot of bells and whistles. The Linux kernel is free, and that is a good thing because it appeals to the masses. But, the Linux kernel is way too generic to be useful in its bare bones form. Thus the Linux kernel is maintained to stay current with today's technology, but it is still stupid at the core. In order to make the kernel more suitable to your needs, or my needs (the guy with an nVidia video adapter), things called modules are added to the kernel. The benefit of being inside the kernel can only be had by recompiling it with these modules added on to it. All that would be lovely if they did not update the base kernel every other week. When a new kernel comes down the pipeline to replace your old one with the modules added, you suddenly lose functionality when you upgrade the kernel. The Linux community, of course, claims its flexibility is the appeal. Us guys with nVidia cards think it's a PIA.
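[Editor's note: for the nouveau-versus-nVidia fight specifically, the widely documented first step is telling the kernel not to load the built-in driver at all, via a modprobe configuration file. This is the generic recipe, not a guarantee for any particular card:]

```
# /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
```

After creating the file, regenerate the initramfs (sudo update-initramfs -u) and reboot before installing the nVidia driver. A driver packaged with DKMS also rebuilds its module automatically on each kernel update, which sidesteps the recompile-after-every-upgrade pain described above.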

Your concern about NAS controller cards should be no greater than your concern for any mechanical hard drive inside your computer system. The beauty of the NAS machines I've seen is that you can hot-swap the drives from one box to another. It's the same as any computer system where the drive is just a chunk of memory that can be replaced easily. Therefore, it's prudent to run disk checks periodically to see how well your HDD is performing. Replace the sucker before the controller goes bad. My next NAS is going to be multi-bay so that I can keep a spare drive handy, plug it in, and copy one over to the other when the disk errors become too numerous. Then again, since my NAS is basically a file server and archive, I may decide to eliminate the entire problem by replacing the HDDs with SSDs.
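Those periodic disk checks can be as simple as reading the drive's SMART data. A rough sketch using `smartctl` (from the smartmontools package); the device path `/dev/sda` is an assumption, and the checks themselves need root, so they're wrapped in a function here rather than run.

```shell
# Sketch: periodic SMART health check so a failing drive gets replaced
# before it dies. Device path /dev/sda is an assumption for your system.
check_disk() {
    smartctl -H "$1"    # overall SMART self-assessment (PASSED/FAILED)
    # Attributes worth watching: remapped and pending-remap sectors
    smartctl -A "$1" | grep -E 'Reallocated_Sector|Current_Pending_Sector'
}
# Example (needs root): check_disk /dev/sda
# Meanwhile, keeping an eye on free space never hurts:
df -h . | tail -1
```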

You must know by now that you can never have too many backups. It's a formula for disaster if you rely solely on your NAS to keep your business records and disaster recovery plan safe. I'd say no fewer than three separate pieces of hardware in three different physical locations is the only way a business has a chance to recover from a data-loss disaster. The cloud is a great place to keep a copy, St Louis is a great complement location, and your NAS is convenient. Having what you need to fully recover in all three places is a smart way to do things.

You make an interesting observation in your last statement. You can get a website host like ours very cheap. We have unlimited space on the server and unlimited upload/download quotas. I've never tested it to see exactly how "unlimited" it is, but that is what the contract says. I requested a MySQL database to be installed because that is what this website needs to run. I never touch the database personally, although I have made a few SQL-level queries just to prove I can. It depends on what kind of data you have, but I am sure you could construct a website like this one to store it. It's messy to do it that way, but hey. We get the small businessman's rate by having a web site attached to our database. LOL
User avatar
Kellemora
Guardian Angel
Posts: 7494
Joined: 16 Feb 2015, 17:54

Re: Awesome Computer

Post by Kellemora »

Hi Yogi

Wow, that was a LONG but VERY Interesting Read!
Regardless of what they say, UEFI may be a great improvement over BIOS, BUT and that's a big BUT, hi hi...
As other comments pointed out, MOBO's with on-board UEFI are shipped LOCKED to Windows ONLY.
I know, I got stuck with one and had to return it for one which did NOT have the Windows ONLY LOCK on it.
It is NOT illegal for a Mobo manufacturer to make a mobo designed for a specific OS. But, as the article said, it would be illegal for Mickey$oft to require same to install their OS.

I checked into why a Linux Kernel upgrade breaks the install and Windows does not.
At first I thought it was because Windows doesn't update its kernel, but I learned they do, almost as often as Linux.
The difference turns out to be fairly simple. Windows keeps track of which new drivers you installed to the kernel, so the next time the kernel changes, it carries over any driver you had installed on the old kernel to the new one.
So, if you have an Nvidia driver for a particular graphics card, this info is stored and reused with the next kernel upgrade. Or, put bluntly: if a change to your Windows computer requires a reboot, it probably had to do with a change to the kernel.
Linux has no table where it stores what changes you made to the kernel, so when you install a new kernel, you have to add to it whatever you installed on the one you are replacing.
This is why the original GRUB always kept your previous kernels, so you could boot into them. Grub2 does not do this, because it is UEFI compatible, or so they say.

About NAS:
I did some checking around after the lightning storm took ours out, hoping to find one I could just move the hard drives from my burned-out NAS into. That's when I found the particular NAS I owned used a proprietary controller card and its own RAID-style system. In other words, the hard drives out of it could not be read by any standard RAID configuration.
Now, if I had gone with a much more expensive NAS which used a specific and common RAID architecture, my drives would be interchangeable with other RAID systems using the same architecture. Hardware RAID is always preferred over software RAID.

I could almost feel you chuckle at my paranoia and keeping so many redundant backups.
Even with redundant backups, I still feared a file becoming corrupt and the corrupted file overwriting my backups, destroying the copy on the backups as well. That's why I always saved a known working file and never overwrote it until I had checked the files in question and found them to be in perfect working order.
After we LOST hundreds of Debi's pictures due to a bad backup program which only saved LINKS back to the hard drive we thought we were backing up, I now only use Rsync for all backups. I have to be careful though, because if I don't use the delete-on-destination feature, I end up with more files at the other end than I bargained for.

Currently, I only have a backup in my office, and a duplicate of it in the house. I no longer have storage back in St. Louis, since my brother downsized his company again, and only works from a couple of laptops.
He does his accounting on-line now (brave little soul that he is, hi hi).
In fact, many of the programs he uses are on-line instead of in his own computer. He does copy a file to his computer just in case, but he no longer keeps all those backups he used to either.

I was at SAM's Wholesale yesterday, buying a TV to replace our newest one, which had the power supply go south.
They had several external hard drives on sale, but not enough information on the display card or box to make a well-informed purchase, like what quality of hard drive is used. They had Western Digital My Books where the prices didn't make much sense: a 3 TB was 50 bucks higher than a 4 TB. The 3 TB said something about syncing with DropBox. I already have DropBox and don't want my storage drives synced with it. So I bought the 4 TB, and hope it has something better in it than a green drive, hi hi...

I normally buy in pairs, so I have one in the garage and one in the house, and mirror them together. But this time I only needed one; since I'm using so many drives up here, I can save them all to the new one I placed in the house.
I can't afford to replace my NAS, which was slower than onboard drives anyhow. Two of my computers up here have 1 TB drives, and I copied all the data I called my File Server to the computer I use most. I made a mirror of it on the other computer and set it as read-only so I don't change it, but made a shared folder on the first computer to save new changes to. When I'm back on the first computer, I add those changed files to the file server copy on my computer, then mirror it back to the other computer so they are both up to date. Then when I have time, I add them to the main hard drive I call my file server. Sounds a lot more complicated than it really is. I have a folder named "(#1)files changed to copy to file server" and another named "(#2)changed files moved to file server, delete when file server backed up to house."
I use initials instead of spelling out the whole phrase, hi hi...
After I copy the files from folder #1 to the file server drive, I move them to folder #2, and delete the ones in folder #1. Then when I backup the File Server Drive, I delete the ones in folder #2.
FWIW: Folders #1 and #2 are also backed up to another local drive on another computer, just in case.
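The two-folder bookkeeping described above boils down to three plain shell steps. All paths here are illustrative temp directories, with `f1`/`f2` standing in for the long folder names.

```shell
#!/bin/sh
# Sketch of the two-folder workflow:
#   f1 = "(#1) files changed, to copy to file server"
#   f2 = "(#2) changed files moved to file server, delete when backed up"
work=$(mktemp -d)
mkdir -p "$work/f1" "$work/f2" "$work/fileserver"
echo "new data" > "$work/f1/report.txt"   # a freshly changed file

# Step 1: copy changed files from folder #1 to the file server drive
cp "$work/f1/"* "$work/fileserver/"

# Step 2: move them from folder #1 into folder #2 (clears folder #1)
mv "$work/f1/"* "$work/f2/"

# Step 3: once the file server has been backed up, empty folder #2
rm -f "$work/f2/"*
```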

And this has gotten way too long again.

TTUL
Gary
Post Reply