Wednesday, December 30, 2009

OpenSolaris is a Failure

Yup... it's official. OpenSolaris is a Failure

Let me start off by saying I am a HUGE fan of Solaris. I am a Sun Certified Systems Administrator for Solaris 9 and Solaris 10. I would choose Solaris/SPARC over Linux/x86 for mission-critical enterprise applications all day long. Why? Because it is rock solid and proven to be reliable. Now, as far as OpenSolaris is concerned, I consider it to suck, and here are a few reasons why.

NO TEXT BASED INSTALLER

Believe it or not, you are unable to install OpenSolaris from a text-based terminal. You are forced to use the gui-install command to get the system installed. I recently installed VirtualBox on my Ubuntu 9.10 laptop and ran into problems starting the graphical login interface on the OpenSolaris CD that was sent to me directly from Sun. After watching the system hang while trying to render the screen time after time, I rebooted and chose the text-based terminal option. I figured I could just log in through the terminal and run the installation in an ncurses-style interface. I was sooo wrong.

After booting off the CD to a text interface I was able to log in to the system using the username/password combination of jack/jack. Once logged in I was able to su - to root using the password opensolaris. However, I could not find any method for installing the system onto the hard drive.

I ran the startx command and was greeted with a grey screen with an X for a mouse cursor, unable to do anything else. Luckily I was able to kill the startx session and search for other options. I looked for the dt* commands, which do not exist on the system. Then I tried running xinit and passing gnome-session to it, which did not work either. I was finally able to work around this problem because I have many years of Linux under my belt. I started using Linux back before your X settings were autodetected, when you were forced to use tools such as SuperProbe and XF86Config to set up your graphical environment. The way I finally got OpenSolaris installed was by running the command xorgcfg.

Once xorgcfg ran, it created an X environment with the old-style configuration setup. From inside the xorgcfg session I was able to click on a blank space on the desktop and launch an Xterm. From inside the Xterm I ran the gui-install command and voila... I am now installing OpenSolaris.
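From memory, the whole dance boiled down to something like this (the prompt and paths are illustrative, not copied from the console):

jack@opensolaris > startx                          # hangs at a grey screen with an X cursor
jack@opensolaris > xinit /usr/bin/gnome-session    # another dead end
jack@opensolaris > xorgcfg                         # brings up the old-style X configurator
(click an empty spot on the xorgcfg desktop, launch an Xterm, then...)
jack@opensolaris > gui-install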

Think I'm lying about that... check this out.


Wow.... all those painful nights attempting to configure X have finally paid off for me :)


WHY OH WHY MUST YOU HAVE INTERNET FOR ZONE INSTALLS????

This part is really frustrating and does NOT behave the same way as Solaris. I attempted to set up a non-global zone using the zonecfg command. I issued the create -b statement to invoke a full-root zone creation. Next I set the zonepath to a directory that I had previously created and chmoded to 700. Now when I invoke the zoneadm install command it fails miserably, complaining that it is unable to download from opensolaris.org. FAIL! Why would you ever need to contact opensolaris.org to do a zone install? This is one of the biggest FAIL moments I have run across with OpenSolaris. Currently, I am not online with my ethernet card, which creates problems with VirtualBox because I do not have networking functional. However, installing a zone should only copy packages from the global zone into a local container. Why would it ever need to contact the internet to do this?
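For reference, the sequence I ran looked roughly like this (the zone name and zonepath here are illustrative):

root@opensolaris > zonecfg -z testzone
zonecfg:testzone> create -b
zonecfg:testzone> set zonepath=/zones/testzone
zonecfg:testzone> commit
zonecfg:testzone> exit
root@opensolaris > zoneadm -z testzone install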

Here is the proof




Very poor performance by an operating system that attempts to claim a part of the Solaris name. This is not Solaris and it definitely shows. This project has even been described as a Linux variant. I will continue to fiddle with OpenSolaris, but at this point I am saying that it sucks as a project. When it doesn't just work out of the box, all the features such as DTrace, ZFS, zones, etc. do not matter at all.

In closing, I was going to sign up at the OpenSolaris forums and try to explain to them how I received a CD directly from Sun and how much it has sucked so far, but I decided that it would not be worth my time. Next month, when Oracle finalizes its purchase of Sun, OpenSolaris will most likely die anyway. I guess I will have to go back to using Solaris x86... at least it works out of the box.

Friday, December 25, 2009

Slackware Sound Card Problems

Recently my 4-5 year old laptop bit the dust due to hardware failure, so I have been forced to knock the dust off the old Slackware box and use it as my primary computer. After performing the steps needed to convert this old Linux router into my new desktop, I have run into a problem with the sound card.

root@solarislackware > lspci | grep Multimedia
00:1f.5 Multimedia audio controller: Intel Corporation 82801BA/BAM AC'97 Audio (rev 11)

Great... I at least know what the sound card is even though it's not working correctly with this old Slackware 11 installation.

root@solarislackware > modprobe snd-intel8x0
insmod: insmod snd-intel8x0 failed

Well... looks like I don't currently have the driver I need for this sound card.

root@solarislackware > modinfo soundcore
description: "Core sound module"
author: "Alan Cox"
license: "GPL"


Well, at least I have the sound core module loaded into my current kernel. Now I need to install the ALSA packages from my old Slackware install CD #1. I put the CD into the drive and mounted the disc at /mnt/cdrom, then changed into /mnt/cdrom/slackware/ and ran the install loop below.
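The mount and cd steps might look something like this (assuming the drive shows up as /dev/cdrom; adjust the device name for your system):

root@solarislackware > mount /dev/cdrom /mnt/cdrom
root@solarislackware > cd /mnt/cdrom/slackware/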

root@solarislackware > for i in $(find ./ -name '*alsa*.tgz' -print) ; do installpkg $i; done

The end result is that alsa-driver, alsa-utils, alsa-oss, and alsa-lib all get installed onto the system. Next I ran alsaconf to configure my card and alsamixer to set my volume levels, followed by alsactl store to save my config.

root@solarislackware > alsaconf

root@solarislackware > alsamixer

root@solarislackware > alsactl store

I checked my work by clicking on KMix, and when it loaded it was no longer useless. KMix actually has some pretty controls to use, and it had found a sound card on my system.

I now have sound on this old computer. I tried to build ALSA from source, but it just was not working for me. Luckily I had the Slackware 11 install CD handy and was able to get what I needed from there.


Saturday, December 12, 2009

Simple uses for Sed

Picking up where we left off on the Awk article

We have the following output from a command.

sed@intro > awk ' { print $2" "$1 } ' test.txt
Phone User
555-555-5553 tom
555-555-5552 larry
555-555-5511 daryl

The problem here is that the column titles Phone and User are not aligned with the output data.

Enter Sed

sed@intro > awk ' { print $2" "$1 } ' test.txt | sed 's/Phone User/Phone........User/g'
Phone........User
555-555-5553 tom
555-555-5552 larry
555-555-5511 daryl


Sed, the stream editor, is being used here to reformat the header line from awk with a global search and replace: sed 's/pattern to match/replacement/g'. The trailing g makes the substitution global, replacing every match on each line instead of just the first.
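A quick throwaway example (not from the file above) showing what that trailing g buys you:

sed@intro > echo "aaa" | sed 's/a/b/'
baa
sed@intro > echo "aaa" | sed 's/a/b/g'
bbb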



Simple Awk Usages

Say you are logged into a bash terminal session on a local Linux box. You have been asked to extract some information from a text file that is on the system. The text file could be 100,000 records long; for this example it has a header line and 3 data lines. The contents of the file are as follows:

awk@intro > cat test.txt
User Phone Zip
tom 555-555-5553 55555
larry 555-555-5552 99999
daryl 555-555-5511

Now, the problem we have is that we need to take this file and rearrange the information to best fit our needs. The phone number needs to be the first thing we see, followed by the username. The zip code should be left out of this report. How can we easily accomplish this task?

Enter Awk.

awk@intro > awk ' { print $2" "$1 } ' test.txt
Phone User
555-555-5553 tom
555-555-5552 larry
555-555-5511 daryl

That was easy. Awk splits each line of test.txt into numbered variables, one per column. Since we want the phone number first, we tell awk to print $2" "$1, which gives us the second column, a space, and the first column. Job done.
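One footnote: awk splits on whitespace by default. If the file used a different delimiter, you would pass a field separator with -F; a hypothetical comma-separated version of the same job (test.csv is made up for illustration) would look like this:

awk@intro > awk -F',' ' { print $2" "$1 } ' test.csv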

Linux Distributions

Depending on the situation, I use a wide variety of Linux distros to accomplish the task at hand.

For my home laptop I use Ubuntu.

The reason I use Ubuntu is because it just works. It's very easy to maintain and it will interact with almost anything I throw its way. I have been quite pleased with 9.04 thus far and have been using it exclusively for 3 months or more. It detected all my hardware and has been rock solid. I had to work through a few oddities like the Flash player and various media codecs, but that is to be expected. I don't have to think twice about this system; it just works.

For my home computer that acts as my router I use Slackware.

Slackware is extremely solid. It will do exactly what you tell it to do. The problem is all the time you have to spend setting it up to do what you want. Package management tools like apt-get and yum just don't exist for Slack. You compile a lot of things from source and resolve the dependencies yourself. However, once you're done you can pretty much walk away from the thing and know that it will keep churning along.

I have a few CentOS boxes I use for various tasks.

CentOS sits somewhere in the middle between Slack and Ubuntu. Cent has the friendly tools for package management, but you can also build things from source fairly easily. I consider it a happy medium between the two.

I have also dabbled with Red Hat (before they started charging), Mandrake Linux, Fedora, SUSE, Puppy, Knoppix, Kubuntu, Debian, Caldera, and various others from time to time. Each one has its good and bad points. I would suggest reviewing your options before selecting your distro; depending on the task at hand, you have a lot of tools to choose from.

Check out DistroWatch for more options: http://distrowatch.com/

Friday, December 11, 2009

Problems I have with CentOS

Let me start off by saying that I like CentOS. Don't get me wrong. It is basically a free version of Red Hat Enterprise Linux. Having said that, though, I have encountered a couple of annoyances with the distribution that I would like to address.

I have not had any firsthand experience with RHEL. My complaints about Cent could very likely hold true for RHEL as well; I do not know for certain. If that happens to be the case, please feel free to comment on this article and let me know.

CentOS is your basic RPM-based distribution, another Red Hat spinoff. I have installed Cent on a few different machines and have been maintaining system updates. Yum, which is a very handy tool, is extremely easy to use for system packages. However, one thing I cannot comprehend is the way CentOS package versioning works. This is my main complaint.

For example, you have installed the httpd Apache package onto your system. A security update has been released and a new version of Apache is available. yum check-update will inform you that a new version is out and you should update. Using yum update you can easily download the new package and install it on your server; the basic flow is sketched below. Here is where the problem begins.
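The update flow itself is only a couple of commands (the prompt here is illustrative):

cent@box > yum check-update httpd
cent@box > yum update httpd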

The Cent Apache package reports that it is version 2.2.3. However, 2.2.3 is an extremely outdated version of Apache; the latest version according to the Apache website is 2.2.14. That looks like a definite reason for concern. The catch is that the Cent version of Apache 2.2.3 is really equivalent to 2.2.14: the CentOS maintainers have backported all the patches needed to step version 2.2.3 up to 2.2.14. If this is the case, why do you still see 2.2.3, you might wonder.

The 2.2.3 package contains all the fixes associated with the new version; they (Cent) merely apply the fixes to the older package and leave the version number alone. I had a hard time understanding this at first, but after working with it for a while I am beginning to comprehend it. Even though a remote scan detects 2.2.3, it is truly 2.2.14 underneath. The problem I have with this has to do with compliance.

For example, say you run an e-commerce site that gets PCI audits. The PCI compliance auditor scans your server and reports that the version of Apache you are running is old, outdated, and contains security problems. We know that this is not true thanks to the CentOS versioning strategy; however, it still pops up on the report and we have to deal with the finding. The same goes for other packages such as the kernel or SSL libraries.

Why does Cent feel they should patch and re-release the old version as opposed to releasing a new updated package which carries the appropriate version number? These kinds of problems can cause much grief for administrators. Would someone please give me a logical explanation as to why they update packages in this fashion? It just does not make sense to me at all.

The rpm metadata might give more detail about the package/version combination you have installed on your system, but that does not really matter. I say it doesn't matter because the security scanners will be coming from external sources and will not have any knowledge of the actual system packages; only the version reported by the daemon itself will be used for testing.
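Locally, though, you can see the backported fixes in the package changelog, which is exactly the detail a remote banner check never sees:

cent@box > rpm -q --changelog httpd | head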

Another complaint I have regarding Cent has to do with yum. I really like the yum tool for installing packages and system updates. However, I have found a few problems using the automated yum installer/updater.

First, I have a system on which I had installed Apache and PHP through the yum repositories. A few weeks pass and there are updates for those two packages. After applying the updates I find myself looking at 3 PHP modules that now fail to load. Sure, this is my fault, I guess. However, the point of a tool such as yum is to make things as easy as possible; if those packages get updated, yum should be intuitive enough to fetch updates for the other packages that are built against PHP or Apache. I now receive a size-mismatch module load error from PHP, which I have worked around for now by disabling the three modules in question. Why would yum not have handled this out of the box? The automated tool I am relying on doesn't seem to keep my system packages aligned.
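One workaround is to update the related packages together; yum accepts shell-style globs in package names, so something like this (package names illustrative) keeps the PHP stack in step:

cent@box > yum update 'php*' httpd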

Second, I had to install the rpmforge packages to get anything useful outside the base distro packages. I do not have a problem with that; it is helpful that something such as rpmforge exists. However, I have run into problems with other packages that I typically use. RKHunter is a very useful tool that I like to install on my Linux systems, and after installing rpmforge it was very easy to install: yum install rkhunter, and boom, just like that, after answering y to the prompt I have rkhunter installed. But when I attempt to run rkhunter --update it fails miserably on a null variable error. The rkhunter-update.sh script doesn't seem to locate the version number and throws an exception when it hits an if statement that compares version numbers.

This is kind of annoying. It spits out some XML blurbs and dies off. Running rkhunter -c does work and goes through the system checklist, though it finds a few typical CentOS directories and has a fit about them, /dev/.udev as I remember and a few others. If this package has been configured for CentOS, then why not fix these common false positives? It's little things such as these that are a turn-off for people. I have been using Linux long enough that I can most likely identify and fix the issue, but your average newbie admin would be hitting the wall here and thinking about another solution.

Cent seems to be rock solid, and it has some Unix-like features such as the sysctl configurations. I think it is a great alternative to paying for Red Hat Linux and I will continue to use it. I just have a few problems... that's all.

Saturday, December 5, 2009

Nesting commands in a bash shell

How many levels deep can you run commands under bash?
Using the backtick operator it is possible to execute a command first and pass its output back to the outer command.

bash-magic > more `find /etc/ -depth -name resolv.conf -print`
::::::::::::::
/etc/resolv.conf
::::::::::::::
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4

This can come in quite handy on the command line. The example above runs the find command first and passes its output to more as an argument.

The backticks ` ` can only do one level of depth cleanly within the shell. But what happens if you need to string more than 2 commands together? Backticks just will not cut it.
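To be fair, backticks can technically be nested with backslash escaping, but it gets unreadable fast:

bash-magic > echo `basename \`pwd\``     # prints the name of the current directory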

Enter the almighty parentheses ().
bash-magic > grep $(id -un $(whoami)) /etc/passwd

Here is an example of running 3 commands deep within the shell. The grep command takes its search pattern from id -un, which in turn takes its argument from whoami. This could have been accomplished with grep `id -un` /etc/passwd, but for example's sake we are nesting with parentheses ().
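And $( ) keeps nesting as deep as you care to go. A three-level toy example (the output assumes bash lives in /bin):

bash-magic > echo $(basename $(dirname $(which bash)))
bin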


Logging interactive shell sessions to a file

Recently I wanted to find a way to log everything a user does when they log into my system. For example, you have a vendor who might log in to one of your systems in order to modify an application. You can see that they have logged in and might be able to find the processes they are running using ps -ef... but what if you want a log of the entire session for a specific user?

Here is how we do it.

First I create a subdirectory under /tmp named watch, which will be used for storing the log once the user exits the session. We also need to change ownership of the directory so that it is writable by the account we want to monitor. In the following example I am using remoteuser for the user and remotegroup for the group; modify these values to fit your needs.

watch-you@my-host > mkdir /tmp/watch
watch-you@my-host > chown remoteuser:remotegroup /tmp/watch/

Now that we have the directory set up for the log file, we need to put the magic into motion. I fumbled through attempting to set this up using the local user's .profile and .bashrc but ran into all kinds of problems when a shell session was invoked. The solution was to put the code into the system-wide profile, which can be found at /etc/profile.

Open /etc/profile with your favorite editor (use sudo if you need it to write the file).

watch-you@my-host > sudo vi /etc/profile
Skip to the bottom of the system profile and add the following lines at the very end.

if [ "$USER" == "remoteuser" ]
then
dte=`date +’%Y_%m_%d_%H_%M’`
export dte
/usr/bin/script -q /tmp/watch/$dte-session-$USER.txt && exit
fi

This snippet checks whether the current user is "remoteuser" (the one we want to monitor). If it is, we set up a variable named dte with a timestamp and export it to the shell. Next we call /usr/bin/script -q /tmp/watch/$dte-session-$USER.txt, and this is where the magic happens. The script command is passed -q (quiet) to suppress the alert telling the end user that script has been activated for the session, followed by the full path to our log file. The reason for the && exit at the end is that when the user types exit to leave the shell, the first exit only terminates the script command, which would alert the remoteuser that something fishy is going on. The && exit forces another exit to be passed to the shell after the user leaves the script session, which allows us (the paranoid system administrators) to avoid detection.

The user logs into the system and script is active. They run a few commands and log out. Once they log out, the contents of their terminal session are written to our file and left for us to review.

ssh remoteuser@my-host
watch-you@my-host > echo "WOW THATS NICE"
WOW THATS NICE
watch-you@my-host > exit

From another session cd into /tmp/watch and see if we have a file.

watch-you@my-host > cd /tmp/watch/
watch-you@my-host > ls
2009_12_05_12_05-session-remoteuser.txt

When we view the file we can see what the user was up to.

watch-you@my-host > more 2009_12_05_12_05-session-remoteuser.txt

watch-you@my-host > echo "WOW THATS NICE"
WOW THATS NICE
watch-you@my-host > exit

There you have it... a quick and dirty way to track user sessions using the script command inside of /etc/profile.