TimeLinux1

Thursday, September 30, 2010

Linux HowTo: suid, sgid, sticky bits and others


Directory Permissions:
-If you have read permission but no exec permission on a directory, you can list the names of its contents but not access them.
-If you have exec permission but no read permission on a directory, you cannot list its contents, but you can access them if you know their names.

suid and sgid:
-Normally, a program runs with the invoker's permissions, not the owner's.
-With suid, a program runs with the file owner's permissions; with sgid, it runs with the file's group permissions.
-suid and sgid have an 's' bit in place of the 'x' bit in the permission list.
-such programs are called suid programs or sgid programs.
-octal value of suid = 4, sgid =2 and suid+sgid =6. 
-eg: chmod 6755 afile, chmod 4755 afile, chmod 2755 afile.
-in ls -l output, suid/sgid appear as a lowercase 's' if the file is executable, and as an uppercase 'S' if it is not.
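A quick illustration using a scratch file (ls output may vary slightly by platform):

```shell
touch afile
chmod 4755 afile        # set suid on an executable file
ls -l afile             # permissions show -rwsr-xr-x (lowercase 's')
chmod 0644 afile
chmod u+s afile         # set suid on a non-executable file
ls -l afile             # permissions show -rwSr--r-- (uppercase 'S')
```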

sticky bit:

-sticky bit - set on a directory, prevents 'world' users from deleting files they don't own, even if they have write permission on the directory (eg /tmp).
-sticky bit is shown as 't' or 'T' in the world exec position. t = exec bit also set; T = exec bit not set (just like suid/sgid s or S).
-sticky bit's octal value is 1.
-eg: MS-KK-Laptop:~ Kali$ chmod 1544 ab/bb/bbc
-r-xr--r-T 1 Kali staff 8 Apr 9 22:15 ab/bb/bbc
MS-KK-Laptop:~ Kali$ chmod 1545 ab/bb/bbc
-r-xr--r-t 1 Kali staff 8 Apr 9 22:15 ab/bb/bbc
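The classic sticky-bit example is /tmp; you can reproduce its mode on a scratch directory:

```shell
ls -ld /tmp             # drwxrwxrwt - world-writable, but users can only delete their own files
mkdir shared
chmod 1777 shared       # rwx for all + sticky bit
ls -ld shared           # drwxrwxrwt
```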

Note:

-file permissions (rwx) and access modes (sStT) hold good only for non-root users.
-in other words, root can delete files even when the permissions / access modes are not set so for the file.
-to prevent such accidents, the 'immutable flag' is used.
-immutable flag prevents even root from deleting a file until the flag is unset.
-to set or unset the immutable flag, the chattr +i / chattr -i cmd is used.
-to view the immutable flag, lsattr cmd is used.
-eg:  # chattr +i afile                                        [ sets immutable flag for afile; even root can't delete it ]
        # lsattr afile                                         [ shows the immutable flag ]
        # chattr -i afile                                      [ unsets immutable flag ]
-see man chattr (and man capabilities) for more on the immutable flag.

-umask - the permission bits that a user does not want granted automatically to newly created files / dirs.
-umask is subtracted from the default permissions (666 for files, 777 for dirs). eg: umask 022 => new dirs get 755 and new files 644.
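To see umask in action:

```shell
umask 022
touch newfile
ls -l newfile           # -rw-r--r--   (666 - 022 = 644; files don't get exec by default)
mkdir newdir
ls -ld newdir           # drwxr-xr-x   (777 - 022 = 755)
```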

Linux HowTo: Useful Shell Commands - 1

Here is a little compilation of useful shell commands. The first in the series. More posts will follow in the same vein..
===
-Processes:

       # ps -ef                                         - every process and full output
       # ps -fu mrinal                              - full output of processes running as user mrinal

-Sessions:
 
        # id -un                                        - Currently logged in user
        # last mrinal                                - Last time user mrinal logged in
        # ls -A                                          - List All files but not . or ..
-Files:
          # find . -mmin -30             - Find files modified in the last 30 min; +30 = more than 30 min ago
          # find . -ls | sort -nrk7      - Find and sort by size
          # tac afile                    - outputs afile in reverse line order ( opposite of cat )
          # od afile                     - dump of a file ( default is octal dump, hence 'od' )
          # od -An afile                 - -A sets the address radix n: o (octal), d (decimal), x (hex) or n (no addresses)
          # pr afile                     - formats afile for print [ adds a default header - date, time, filename and page number ]
          # nl afile                     - numbers the lines of afile in the output
          # ls -lu afile                 - reports last access time of afile
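A few of the file commands above in action on a two-line scratch file:

```shell
printf 'one\ntwo\n' > afile
tac afile               # prints "two" then "one"
nl afile                # numbers each line
od -An -tx1 afile       # hex dump without the address column
```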

 


Wednesday, September 29, 2010

Linux HowTo: Netfilter and NAT

-Linux can act as a full featured router. Many commercial routers run the linux kernel.
-A standard pc with a few network cards can act as a basic router.
-To set routing (ie ip forwarding on), do this:
        # echo "1"    >    /proc/sys/net/ipv4/ip_forward

-netfilter    - the packet filter or firewall system built into Linux.
                 - it helps a system decide how a packet should flow.
-iptables    - the cmd line tool to manage netfilter.
-netfilter tasks:
        . nat
        . mangle
        . raw
        . filter

-nat      - network address translation
            - allows multiple systems to access another network via a single ip address, ie like a door or gateway.
            - gateway    = nat + routing
            - firewall     = nat + connection tracking

-mangle    - marks and alters packets in specific ways (eg changing type of service bits in pkts to quality of service bits)
-raw        - used for connection tracking at low level
-filter       - provide basic filtering
-nat allows a sysadmin to hide the hosts on both sides of a router from each other.
-ie due to nat, the two sides are unaware of each other, only the router matters to them.
-netfilter nat:
        . source nat        (snat)
        . dest nat        (dnat)
        . masquerading
-snat        =    hides source ip and port to look like a fixed ip        (eg home private lans)
-dnat        =    changes destn ip and port                    (eg server farm lan)
-masquerade=    special case of snat used in firewalls w dynamic ip    (eg home private lans)
-chokepoint    =    the ip that acts as the gateway in netfilter nat.

-how does netfilter nat do its address translations?
    -by maintaining an internal list of connections & nodes -- this list is called 'flows'.
    -the flows know nothing about the contents of a connection, only the source-destn mapping.
    -a flow entry looks like        -    <ip addr of node>:<port>

-since netfilter nat doesn't know the contents of a connection, it can be a problem when malicious activity happens.
-to prevent this, Linux has something called 'stateful connection tracking' that reads the header of each pkt to decide if it's good or bad.
-stateful connection tracking can be achieved wherever nat occurs.

-chains    -    a list of rules that define how a packet flows in netfilter.
-chain types    -    input, output, forward, pre-routing, post-routing

-to list and verify netfilter is installed compiled and working:
        # iptables    -L
        # ip6tables    -L                    [ ipv6 ]

-netfilter config file:
        . /etc/sysconfig/iptables-config            [ ip6tables-config for ipv6 ]
       
-useful netfilter cmds:
         # iptables    -t < table >    [ -A | -D | -R | -L | -F ]   <chain>    <rulespec>
-eg    # iptables    -t  filter  -A  INPUT  -p tcp  --dport 80  -j ACCEPT        [ accept all packets destined to tcp port 80 on the INPUT chain ]
        # modprobe    iptable_nat
        # echo  1  >    /proc/sys/net/ipv4/ip_forward                    [ sets routing, must for nat ]
        # echo  1  >    /proc/sys/net/ipv4/tcp_syncookies                [ sets syn cookie protection ]
        # echo  1  >    /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts        [ disables icmp broadcast / smurf attacks ]
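As an illustration, a minimal nat (masquerading) ruleset for a home router, in iptables-restore format. The interface names are assumptions (eth0 = uplink with the dynamic public ip, eth1 = private LAN):

```
*nat
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
```

Combined with ip_forward = 1, this rewrites the source address of every packet leaving eth0 to the router's own address.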

Linux HowTo: RAID Concepts

RAID or Redundant Array of Independent Disks is a method of data protection in the event of hard disk drive failure. As you know, all your data lives in the form of files on some form of media -- hard drives, CDs, DVDs, USB sticks, SD cards etc. So if you are concerned about data protection and recoverability, it helps to maintain multiple copies of the data on different media. RAID works similarly, but on a large number of hard drives arranged in the form of an array. This is especially important for enterprises and businesses that value their data immensely. Here are some RAID or simply raid concepts -- I've deliberately tried to keep it readable by not including specific commands. If you are looking for specific raid commands, try googling for them; there are tons of resources, tutorials & user groups out there that can be helpful..

-RAID    =    redundant array of independent disks
        - makes several independent disks to appear logically as one.
        - eliminates spof  (single point of failure)
        - spreads IO for performance
       
-Raid terms:
        - Array    = a collection of disks logically grouped to appear as one to the application.
        - stripe width    = the number of parallel units of data that can be read or written simultaneously = usually equal to the number of disks in the array. The more the better.
        - stripe size    = the amount of data written across the stripe in one operation.
                = a smaller stripe size spreads each IO over more disks; a larger stripe size means each IO touches fewer disks.
                = is usually a dynamically configurable parameter (unlike stripe width which is fixed = n disks)
        - chunk size    = subset of stripe size. Also called striping unit.
                = the amount of data written to each disk in one swipe as part of the stripe size.
                = the optimum value depends on IO rate. Ideally each IO request should be serviced by a single disk.

Raid types:

-Striping:
        - also called raid 0.
        - The technique of writing chunks of data across a set of disks.
        - data is also read back in the same fashion, in chunks.
        - the idea is spreading read-write across several disks will have better bandwidth and performance than a single disk.
        - disks in raid 0 can be of different sizes, but same-sized disks are recommended. This prevents concentration
            of data in the bigger disks of the array.
        - raid 0 gives good performance but poor recoverability. If one disk in the array fails, the whole array fails.

-Mirroring
        - also called raid 1.
        - The technique of writing the same data on multiple disks.
        - usable capacity = number of disks / number of mirror copies. eg 4 disks in 2-way mirrors => usable capacity = 2 disks.
        - raid 1 gives high recoverability but lesser performance.

-Parity
        - Is the XOR information that can be used to recreate lost data in case of disk failure in an array.
        - This comes useful in a raid 4 or raid 5.
        - XOR truth table says 0 if both 1s or both 0s. 1 if one 1 and one 0. eg 1xor1 = 0, 1xor0=1, 0xor1=1
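Parity reconstruction can be demonstrated with shell arithmetic, treating three small numbers as the data chunks of one stripe:

```shell
d1=5; d2=9; d3=12
parity=$(( d1 ^ d2 ^ d3 ))      # XOR of all data chunks, stored on the parity disk
# suppose disk 2 fails: XOR the parity with the surviving chunks to recover d2
echo $(( parity ^ d1 ^ d3 ))    # prints 9
```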

-raid 4
        - also called dedicated parity in which one disk is dedicated for storing parity info.
        - requires 3 disks at minimum.
        - parity information is generated and stored along with each write operation.

-raid 5
        - also called distributed parity in which parity info is stored successively on each participating disk in the array.
        - requires 3 disks at minimum.
        - parity information is generated and stored along with each write operation.
        - performance is slightly better than raid 4 because the parity is spread across all the disks.

-Note:   
        -The three basic raid architectures are raid 0, raid 1, raid 5. (not raid 4?)
        -These basic raid architectures can be combined in a variety of combinations to produce "hybrid" or "nested" archs.
            eg:    raid 0+1, raid 1+0, raid 5+0
        -In both raid 0+1 and raid 1+0, the actual usable disk space is half the available disks.
        -eg:    if 10 disks are available, the actual usable disk space is 5 disks.
        -However, fault tolerance of raid 1+0 > raid 0+1

Tuesday, September 28, 2010

Linux HowTo: IPv4 Addressing Basics

The Internet Protocol (IP) is part of the TCP/IP stack. IP is the protocol responsible for addressing the nodes on a network. The most common example of such a network is the Internet. The IP protocol helps uniquely identify each node on the Internet. The IP addressing scheme currently in use is version 4, called ipv4 for short. An ipv4 address is 32 bits long and is represented as 4 dot-separated decimal numbers between 0 and 255 (eg 192.168.1.1 or 10.10.1.10; note: the first number cannot be zero) for human readability (the computers translate it to binary for their communication). ipv4 can support about 4 billion addresses, but since the Internet is growing explosively, 4 billion is just not enough. Therefore the new version 6 of IP (ipv6) has been developed and it can support a mind-blowing number of hosts (how about 1 followed by 38 zeroes - yes, that big !!). However, in this post we limit ourselves to ipv4.
===

-Every ipv4 address has four 8-bit octets, written for example as N.N.H.H for a class B address, where the network part (N) identifies the network and the host part (H) identifies the individual hosts in that network; example 172.10.1.2. Note the first octet cannot be zero, ie an IP address cannot start with 0. How many octets belong to the network part depends on the class of the address, which is determined by the first octet as shown below:
-ipv4 address classes (by first octet):
        - class A        - 1 - 126                      [ 127 reserved for localhost/loopback ]
        - class B        - 128 - 191
        - class C        - 192 - 223                    [ 192.168 reserved for private n/w ]
        - class D        - 224 - 239                    [ multicast ]
        - class E        - 240 - 255                    [ reserved, not used ]
-private IP reservations [ ie IP addresses not routable on the Internet (eg personal home networks), as opposed to public IPs which are reachable on the Internet ]:
        - 10.0.0.0 - 10.255.255.255
        - 172.16.0.0 - 172.31.255.255
        - 192.168.0.0 - 192.168.255.255

Closely related to the IP address of a host is its Subnetwork mask or subnet mask or simply the netmask.
-netmask    -  a 32-bit mask with the n/w part set to all 1s (255) and the host part to all 0s.
                  -  it tells the ip stack which part is the n/w addr and which part is the host.
                  -  the ip stack uses the netmask to decide if the destn of a pkt belongs in this subnet or needs to be routed.

-ip addr    with Binary AND of    netmask            =    n/w addr
-ip addr    with Binary AND of    inverted netmask    =    host part
   
-network addr / n    =>    CIDR notation: n = the number of network bits in the netmask
                     =>    2^(32 - n) - 2 = the number of hosts possible in that network.
-note:    In the more recent Classless Inter-Domain Routing (CIDR) IP nomenclature, netmasks don't need to fall on class boundaries. This makes more effective usage of the available 4 billion plus IP addresses in ipv4.
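The netmask AND operation can be tried out directly in the shell (a hypothetical 192.168.1.10 address with a /24 netmask):

```shell
ip=192.168.1.10
mask=255.255.255.0
oldIFS=$IFS; IFS=.
set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4     # split the ip into octets
set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4     # split the mask into octets
IFS=$oldIFS
echo "$(( i1 & m1 )).$(( i2 & m2 )).$(( i3 & m3 )).$(( i4 & m4 ))"   # prints 192.168.1.0
```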

Linux HowTo: TCP/IP Basics

TCP/IP or Transmission Control Protocol/Internet Protocol is a suite of network protocols. TCP/IP is the network protocol suite behind the Internet. The main protocols in TCP/IP are (rather obviously) TCP & IP. Another example beyond these two dominant constituents is ICMP (Internet Control Message Protocol), a protocol for hosts to talk to each other at the network layer; it helps in checking latency, eg in ping; it is not meant to be used directly by users, so it has no ports, error checking etc.


The TCP/IP suite is based on a layered architecture, defined in RFC 1122 which has 4 layers  - Data-link, Internet, Transport and Application. It was modeled before the 7 layered ISO/OSI stack.
Note: RFC = Request For Comments. Technical Documents published by IETF that defines how the Internet is structured and functions.

In the rest of the document, I will be using small letters for sake of easy typing - eg: tcp/ip instead of TCP/IP.

Before we begin, lets review some networking jargon:

-packets    -    The smallest unit of data that networks deal with. A packet is some control info + payload data.
-frame        -    layer-2 header + packet (+ trailer).
-tcp/ip and network layer mappings:
        . layer 1    - physical    - copper wire
        . layer 2    - datalink    - ethernet
        . layer 3    - internet    - ip routing
        . layer 4    - transport    - ordering of pkts, retransmit if reqd
        . layer 5-7    - session, presentation, application        - ssl, pop, imap, http, vpn etc.

-each layer of the tcp/ip (or the osi) stack adds its own header to the data-packet.
-an ip packet cant be more than 65535 bytes in version 4 of ip (ipv4)
-ipv6    = IPng    =    IP next generation    = designed to coexist with ipv4 via transition mechanisms (it is not directly backward compatible)
-ipv6    = 128 bits    = 10 to the 38 addresses 

-mtu            =    max transmission unit    =    the largest packet that can be sent bet two hosts
-eg:    ethernet mtu is 1500 bytes.
-fragmentation    =    breaking up of ip packet when it is bigger than the mtu.
-eg:    if the ip pkt is 4000 bytes & the ethernet mtu is 1500 bytes, the ip pkt will be fragmented into 3 pkts of roughly 1500, 1500 & 1000 bytes resp (each fragment also carries its own ip header).
-ttl    (time to live)    =    a field in the ip header with a value 0-255; nominally the number of secs a pkt may live on the n/w, in practice decremented by one at each hop and the pkt dropped when it reaches 0.
-ttl is decremented only by routers (ie layer 3), not by switches (layer 2).

 
Note: tcp/ip is the software piece of the network that runs on a networking hardware like Ethernet, Token Ring, etc. Since Ethernet is the most common form of networking, lets talk more of it. An ethernet packet has a header and a payload.
-ethernet header comprises:
-source ethernet addr, dest ethernet addr & pkt protocol type
-note:     ethernet addr   =>   mac addr    - 48 bit or 6 bytes long    - unique id of every nic
-there are two types of ethernet protocols    -    802.3    (older)    & ethernet-2
-the ethernet header content helps tell the diff bet the two. 802.3 ethernets are rare these days.
-to analyze the ethernet packet headers, the tcpdump cmd can be used which needs to be run as root.

Monday, September 27, 2010

Linux HowTo: Routing Basics

What is a Router?

In the simplest terms, a Router is a network device sitting between two networks and permitting communication between them. It is sometimes also referred to as a 'gateway' although a gateway is really a special router that does routing + network address translation.
     ie gateway = router + nat

-note:      typically during a network communication, sending hosts don't know the full path to their destn; they just know the nearest router (which they find in their routing table).
-routing is basically the act of 'ip forwarding' [ not to be confused with port forwarding, which is ssh tunnelling ]

-'route' cmd is used to define routes for a host:
-eg:    # route  add  default  gw  192.168.1.1  dev  eth0        [ set the default route via eth0 ]
          # route  del  -host  192.168.2.50                      [ delete the route for 192.168.2.50 ]

-The routing table for a host can be displayed using one of the foll cmds:
        -route
        -netstat
        -ip route
eg:    # route  -n                      [ doesnt try to do hostname resolution ie shows numeric IP addr ]
        # netstat -nr                    [ r=route, n=no hostname resolution ie shows numeric IP addr ]
        # ip route show table main        [ shows the main table, linux can have multiple route tables ]

-Linux can act as a full featured router. Many commercial routers run the linux kernel.
-A standard pc with a few network cards can act as a basic router.
-To set routing (ie ip forwarding on), do this:
        # echo "1"    >    /proc/sys/net/ipv4/ip_forward
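The echo above does not survive a reboot; the usual way to make forwarding permanent (path as on Red Hat-style systems) is a line in sysctl's config:

```
# /etc/sysctl.conf - make IP forwarding survive reboots
net.ipv4.ip_forward = 1
```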

Linux HowTo: Apache Web server Basics

Apache is by far the most popular web server out there. Its market share is more than the combined market shares of all competing web servers. It would not be far-fetched to say that the Apache web server 'drives' the Internet. The best part about Apache and its web dominance is that it is licensed as open source software under the Apache Software License by the Apache Software Foundation. The current versions of Apache are in the 2.x series.

Here is a brief summary of how Apache (or more broadly an http web server) works:
-http working:
        . web client requests web server's tcp port 80 for connection.
        . web server allows web client to connect.
        . web client issues http cmds on web server.

-the default http web server port is 80.
-to run multiple web servers on the same node, each web server needs its own distinct port.

-apache http web server starts as root to complete initial network config ie binding to port 80 for 'listening'.
-once initial network work is done, apache 'gives up' the root permissions and becomes a non privileged user--eg www, nobody, apache, daemon etc..
-the idea is that by limiting the permissions security is increased. many security problems are ascribed to poorly written cgi (perl?) scripts.
-a lot of the flexibility & power of apache comes from the numerous modules it has eg mod_perl, mod_cgi, mod_ssl etc.
-by default the apache config file, httpd.conf expects the web server to run as the user daemon.

-to install apache on Red Hat Linux :
        # yum -y install httpd

-useful apache cmds:
        # service httpd    start | stop | restart | status
        . http://localhost    or    http://[::1]/                -    to verify apache working ok
        . /etc/httpd                                                  -    server root dir
        . /var/www/html                                          -    doc root, /usr/local/httpd/htdocs if apache installed from src rpm.
        . /etc/httpd/conf/httpd.conf                          -    apache config file

-virtual host
        . a single box hosts multiple web servers listening at different ports.
        . represented by <virtualhost> </virtualhost>    tags.
        . inside the above tags, the ServerName directive denotes the virtual web server name (& it should be valid and resolvable via /etc/hosts or dns).
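A minimal sketch of such a block, with hypothetical server name and document root:

```
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
</VirtualHost>
```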

-apache log dir
        . /var/log/httpd     dir
        . inside this the two files to look for are access_log and error_log

Linux HowTo: Inodes, Links and Device Files

What is an inode?

An inode is a system-generated unique number assigned to a file. It stores info about the file's location on disk and its attributes. So every file in the system has an inode. Usually, every filesystem has an upper limit on the number of inodes it can hold, which means there is an upper limit on how many distinct files can be created on that filesystem. It is therefore advisable to set this to a reasonably high number to avoid running out of inodes (and thereby being prevented from creating new files). Regular deletion of old and redundant files also helps keep inode usage in check (regular deletion of old log files is a good starting point).

The simplest way to get the inode number for a file is via the command 'ls -i'
Note:    from experiment, mv preserves inode number, cp creates new inode
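The mv/cp behaviour is easy to verify:

```shell
touch afile
ls -i afile             # note the inode number
mv afile bfile
ls -i bfile             # same inode - mv just renames
cp bfile cfile
ls -i cfile             # different inode - cp creates a new file
```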

What is a Link?

A Link is another name for a file. It's simply another address pointer for a file on disk. There are two types of links possible - hard links and soft links. Links are created using the command 'ln'.

-hard link    -    two files with same inode; their names may be different but they point to same disk location. cant span fs (as different fs have separate inodes). Hard links are created by default using 'ln' command.
eg: # ln fileA fileB             -- creates fileB as a hard link to fileA

-soft link    -    two files that refer by name instead of inode. ie their inode values are different. they can span fs. Also called a symbolic link or symlink. Soft links are created using the 'ln -s' command. They show 'l' as the first character in ls -l output.
eg: # ln -s fileA fileB         -- creates fileB as a soft link to file A

        [Mnemonic: hard same inode = hsi ]

What are device driver files?
-Device driver files are special files that allow programs to interface with devices; block device files do so in blocks of data.
-Their first char is 'b' in ls -l      (eg ls -l /dev/sda1)
- block device files have two numbers - major and minor.
            -major block device file number    -    points to the driver
            -minor block device file number    -    points to the interface.
-eg:    if /dev/sda1 and /dev/sda2 have different interface ports but share the same driver, then they will have same major number but diff minor number.

Other device driver files:
-character device files:
       . device driver files to interface with devices as one char of data at a time. Their first char is 'c' in ls -l    (eg ls -l /dev/tty1)   
       . they too have a major and minor number like block files.

Note:    from man pages, mknod cmd is used to create block or char device driver files.

Sunday, September 26, 2010

Linux HowTo: Secured Shell Tunneling - Poor Man's VPN

-ssh Tunneling is also called port forwarding.
-ssh Tunneling = port forwarding  = poor man's vpn = a way to forward otherwise insecure tcp traffic through ssh.

-Utility of ssh Tunneling    = allows users to securely access their company data while remote (home, internet, etc)
-as long as the user has an ip conn to the Internet, he can connect to the remote server securely.

-ssh with the -L option allows tunnelling a connection through ssh.
-using hostA as a gateway to connect securely to another hostB (ie via hostA):
            clientA# ssh -L local_port:hostB:dest_port   hostA
 ie:       clientA====hostA-----hostB            [ ==== is the encrypted leg ]
 ie:       user on clientA authenticates on hostA; connections to localhost:local_port are forwarded on to hostB:dest_port.
 ie:       it is a way for people inside a firewall or proxy to bypass the firewall restrictions and get to computers in the outside world.

Additional notes:

-ssh with -X option is a type of ssh tunneling. This makes use of ssh to Tunnel X Windows remotely -- note that X is an insecure protocol.
-Default port for X is 6000. If this port is blocked, a workaround is to run ssh with -X option to display X output.
-Example:  User mrinal wants to connect from local node A, running an X server (and ssh client), to node B, running an X client (and ssh server)
   On Node A:
     $ ssh -X mrinal@nodeB              -- user mrinal starts ssh Tunnel between node A & node B

-ssh with a lowercase -x disables X11 forwarding (the opposite of -X).

Linux HowTo: Secured Shell (ssh) Configuration

Earlier we had spoken about the Basics of ssh in this previous post. Today we will go a little more in depth. Ssh is quite vast and can fill a book.
Ssh is based on the public-private key cryptography that was developed to fill the gaps left by earlier protocols like Telnet and rsh (remote shell). ssh is safe even if someone is using a packet sniffer (like tcpdump). It prevents impersonation because the ssh client keeps track of the host keys of the servers it has connected to.

-cryptographic basics:
    .everyone has a public key which is world accessible
    .no one has access to others private key. only the owner knows it.
    .a msg locked with joe's private key is unlocked only by joe's public  key.
    .a msg locked with joe's public  key is unlocked only by joe's private key.

-cryptography can ensure:
    . only desired recipient receives the msg
    . recipient can authenticate the id of sender.

-system wide ssh config files:
    . /etc/ssh/ssh_config        - ssh client  config file
    . /etc/ssh/sshd_config        - ssh server config file
-client config can be overridden by:
    1. command line options
    2. user config file in $HOME/.ssh/config

-by default, all users and all groups are allowed ssh access to a host.
-by default, root ssh login is allowed.
-rsa and dsa are just two algorithms for ssh cryptography.
-the foll just creates a pair of private and public key as per rsa algorithm:
    # ssh-keygen -t rsa            [ useful when wanting to ssh w/o pw ]
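A sketch of the passwordless-login flow that builds on this (remotehost is a hypothetical server; ssh-copy-id appends the public key to the remote authorized_keys):

```shell
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # key pair with an empty passphrase
ssh-copy-id user@remotehost                 # install the public key on the remote host
ssh user@remotehost                         # should now log in without a password
```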

-To verify ssh pkgs:
    # rpm    -qa  | grep -i ssh
-To verify ssh version:
    # ssh    -V
-To start sshd:
    # service    sshd    start        or
    # /etc/rc.d/init.d/sshd  start
-Note:   
-To control services at various levels the foll are similar:
    # chkconfig --level nnn <service>   
    # ntsysv --level nnn
    # system-config-services

-Note:
-The default sshd port is 22 & it can be changed in /etc/ssh/sshd_config followed by "service sshd restart"
-To permit root to ssh, edit /etc/ssh/sshd_config as follows:
    PermitRootLogin  yes            [ note there is no '=' in sshd_config; this could be a security risk ]

-To generate ssh host key:
    # ssh-keygen    -t <type>    -f <file>        [ Type can be rsa or dsa ]
-This creates a private, public host key pair.

-The ssh host key uniquely identifies a host; clients use it to verify they are talking to the genuine server (the session data itself is encrypted with per-session keys negotiated at connect time).
-This prevents 'man-in-the-middle' attacks.
-The ssh-keygen cmd creates a private-public host key pair in the files specified by the -f option.
-Lets say the cmd given was:
    # ssh-keygen    -t rsa    -f    /etc/ssh/ssh_host_key
    -/etc/ssh/ssh_host_key        -contains both private and public keys
    -/etc/ssh/ssh_host_key.pub    -contains only public key.
-The public key is used to encrypt the data &
-The private key is used to decrypt the data.
    [Mnemonic = pub-en ]
-The passphrase is used to access the private key.

-The first time a new host is ssh-ed to, its fingerprint is added into the source machines "known_hosts" file.
-The new host's fingerprint can be verified as follows:
    node1#    cat  ~/.ssh/known_hosts    | grep -i node2
& on     node2#    ssh-keygen -y -f /etc/ssh/ssh_host_key
The key portions of the two outputs above should be the same.

-sometimes the fingerprint of a remote host may change, eg after an os reinstall or ip address change, & in that case
an attempt to connect to the remote host fails with an error. The fingerprint for the remote host then needs
to be deleted from the "known_hosts" file and the ssh retried; this records the new fingerprint of the remote host.
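ssh-keygen can remove the stale entry without hand-editing the file (node2 is a hypothetical host):

```shell
ssh-keygen -R node2         # deletes node2's entry from ~/.ssh/known_hosts
ssh node2                   # prompts to accept the host's new fingerprint
```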

-Useful cmds:
    node1# ssh-keygen    -l -f /etc/ssh/ssh_host_key        [ shows fingerprint of node 1]
    node1# ssh-keygen    -y -f /etc/ssh/ssh_host_key        [ shows host key of node 1 ]
   
    node1# ssh -p 52 user@node2        [ ssh to node2 as user at non-default port 52 ]
    node1# ssh -X user@node2             [ ssh to node2 as user and run X client on node2 ]
    node1# ssh -vvv user@node2          [ ssh to node2 as user and run debug ]
Then review /var/log/secure & /var/log/messages.

Friday, September 24, 2010

NASA Map of Particulate Matter

NASA released its report on particulate matter content in the atmosphere for the whole world a few days ago. It can be seen here on this page.
It shows that the Asian and African regions are the most polluted with particulate matter. Some levels are dangerously high and are cited as the cause of millions of premature deaths worldwide annually. A report like this has been due for a long time. An eye opener for many in the developing world. In the race to achieve Economic Development and prosperity, many Asian countries are neglecting their footprint on the Environment. Rampant deforestation, Random Industrialization and Excessive Population are causing an unimaginable stress on the Environment. I think those regions in the Red must take a look at how they want to develop. Fast and short prosperity, or long and sustainable?

In my opinion, the deadly mix of uncontrolled population and deforestation causes a cascading effect on the natural resources of a country and after a point the resulting pollution problem becomes out of hand. The best way to tackle this in my opinion is through Education. Education to people about using Renewable resources more than fossil resources. Education to children to develop healthy habits of Recycling. It applies to every one too.

Also note that a bulk of the Natural resources lost from the developing world ends up in the developed parts. So the developed world is indirectly responsible for the depletion of Natural resources in the developing world. These two ends of the spectrum need to meet in the middle. Otherwise more scary reports are on the way.

This is our only livable home. Please recycle as much as you can.
Again here is the report . See for yourself.

Linux HowTo: NFS (Network File System)

Network File System or NFS was developed by Sun Microsystems in the early 1980s for their Solaris Operating System. Over the years, it has been adopted into most Linux / Unix OS of the day. The basic idea behind NFS is to be able to locally mount file systems residing on a remote system. In this case the local system acts as a client and the remote system acts as a server. NFS is a stateless and unencrypted protocol, meaning that if the server crashes, the client has no way of knowing it, plus the data transfer is unencrypted so if a cracker were to sniff the traffic, they would be able to 'see' the contents of data packets (this could raise some alarms in the Network Security Administrator in your company. More NFS details below..
===

-in nfs, the client server communication happens via rpc (remote procedure call).
-portmap  -    is the rpc service manager. Whenever a service wants to make itself available on the nfs server, it needs to register itself with portmap.
-portmap tells the client, where the service is located on the server.
-current versions of nfs are 2, 3 & 4. version 3 being the most common and widely used.

-nfs can be a kernel builtin or it can be a standalone nfs daemon.
-the default seems to be the standalone nfs daemon.
-On the server side, the primary nfs config file is /etc/exports; and its format is:
        <dir>    <client(permissions)>            [ there could be multiple clients ]
-to export /etc/exports:
        # exportfs  <option>
-eg:    # exportfs            -a=export all, -r=reexport, -o=options like ro, rw, no_root_squash (default is root_squash)etc

-On the client side, to mount an nfs fs from  remote system:
        # mount -o     <options>    server:/dir        options like ro, rw, soft, hard, bg etc...
-to see current mounts:
        # showmount  -e


NFS Configuration settings
-hard     mount    -    client waits indefinitely
-soft      mount    -    client will timeout eventually
-nfs intr     -    nfs interrupt option     - enables processes to interrupt and move on if nfs is not responding.

-default block size in nfs:
        - version 2,3     -    1 KB
        - version 4        -    4 KB
-the above can be tuned using wsize & rsize params        (write and read).
-eg:    in /etc/exports:
            serverA:/home    /mnt/home    nfsvers=3,rw,bg,rsize=8192,rsize=8192

Linux HowTo: File system Organization on Disk

Today, discussing some File system related concepts - inodes, Device Files, Superblock, Journaling etc.
Lets get started..
===

What is an inode?
-inode = stores info about a file location on disk and its attributes.
-an inode points to either:
    -another inode    or
    -a data block
-inode contains:
    -file owner
    -permission
    -size of file
    -creation time
    -last access time
    -group id
-Note:    inode does not have file name    -    this is so as to permit an inode to point to multiple inodes.

-hardlink    -    two files with same inode; their names may be different but they point to same disk location. cant span fs (as different fs have separate inodes)
-softlink    -    two files that refer by name instead of inode. ie their inode values are different. they can span fs.
    [ To remember, hard same inode = hsi ]

What are Device Files?
-Device files are special files for organizing data in Linux (and Unix) OS.
-block device files:
-device driver files to interface with devices in blocks of data.          Their first char is 'b' in ls -l      (eg ls -l /dev/sda1)
-block device files have two numbers - major and minor.
    -major block device file number    -    points to the driver
    -minor block device file number    -    points to the interface.
-eg:    if /dev/sda1 and /dev/sda2 have different interface ports but share the same driver, then they will have same major number but diff minor number.
-character device files:
    - device driver files to interface with devices as one char of data at a time. Their first char is 'c' in ls -l    (eg ls -l /dev/tty1)   
    - they too have a major and minor number like block files.
Note:    from man pages, mknod cmd is used to create block or char device driver files.
Note:    from experiment, mv preserves inode number, cp creates new inode.

-sync    = writes disk cache to be written to disk;

What is a Superblock?
-superblock   
    -the first piece of info read from disk.
    -it hold info about:
    -location of first inode
    -amount of space
    -disk attributes
-without a superblock, data on disk is useless. that is why multiple copies of superblock are maintained.

Why is Journaling of FS important?
-ext3 is enhanced ext2 with journaling.
-journaling helps keep track of bad blocks. so fsck does not have to be run after every crash.
-the journal keeps track of changes like a redo log. it writes only when commit happens.
-so data is cleanly written or not written at all--thus avoiding corruptions during crash--this increases integrity and avoids fsck to be run. this saves time.

Thursday, September 23, 2010

Facebook donates $100 Million

Ok, Todays hot news. Right on the heels of Facebook based movie 'The Social Network', which is reported to show the lives of the facebook founders and their rivalries with the Twin brothers who blame each other for stealing the idea of the Social Networking site while in the Harvard dorms, there is this news of Facebook deciding to donate $100 million to a school district in Newark NJ.
Well, while this is really a good and generous effort on the part of the facebook org, I wonder why this is the opportune time for the donation. Did they suddenly become magnanimous? Did they suddenly got interested in doing something good about the crippling Education system in NJ? And even if so, is this really enough?
The answer is, somewhere out there but the big possibility in my opinion, this is only too little too late and a farce.
If you have gone through the reviews for the movie and also been following the news lately, you will notice that in reality also the founder of Facebook (Mark Zuckerberg) was sued by his own dorm-mates Twin brothers for having stolen their idea of social networking and branded it as his own. Later on that suit was settled out of court for $65 million. If that is the case then the $100 million donation to the school district just seems like a cheap bribe on part of Facebook to silence the bad criticism that has flown about the site. Bad. And what is $100 million for a company who is valued at atleast $6 Billion? Hmm..not much I suppose. Hey I'm not saying this side is bad that is not. All I'm saying is there is no smoke without fire.
BTW, on the side, why is the network site down Two days in a row? Is that also bad luck or bad design?

Linux HowTo: PAM (Pluggable Authentication Module)

GNU/Linux or simply Linux is the most secure OS out there. It has a host of security features like SELinux, inbuilt firewall, PAM etc. In this post I talk of PAM (Pluggable Authentication Module). PAM is so secure that if you want you can prevent even root user to login to the system. Security is dear to many System Administration, we of course recommend a little moderation..

- PAM
- Pluggable Auth Module
- a security layer that takes on the task of authentication on behalf of apps instead of apps having to do so themselves.
- each application has its own pam config file. if a specific config is not there, a default file is still there.

So how does PAM Magic work?

- Well, when programs need to authenticate someone, they call one of the functions in pam library.
- pam then checks the config file for that application.  if a specific config is not there, a default file is still there (/etc/pam.d/other)
- the config file tells the pam library module what checks to perform.
- the checks performed by the library module may be as simple as checking /etc/passwd or more complex as checking with an ldap server.
- the config files exist in             /etc/pam.d
- The library modules exist in         /lib/security.
                app -> config -> library module    <->    user

-Each line in a pam config file is evaluated line by line. Each line returns a success or failure flag. The summary of the flags is returned to the app.

-Config file format:
        - col 1        module_type        - auth, account, session, password       
                [auth ask for passw; account=account attribs( egtty type); session=env settings, logging password points to the module to change passw]
        - col 2        control_flag            - required, requisite, sufficient, optional
        - col 3        module_path        - actual path of the library
        - col 4        arguments            - optional, has values like debug, no_warn, use_first_pass etc...

- recommended to leave the default config file /etc/pam.d/other as it is (it is very restrictive by nature).
- To fix pam errors, you can log into single user mode.
- a good place to look for PAM errors is /var/log/messages.

Wednesday, September 22, 2010

Linux HowTo : Disk Partitions and Boot Loaders

Linux Notes on Disk Partitions and Boot Loaders.
In this Post, we will be talking of Two predominant Boot Loaders for Linux - GRUB (GRand Unified Bootloader) and LILO (LInux LOader):

First the basics about Linux Partitions:
-disk partition names:
    . /dev/hdx        - ide
    . /dev/sdx        - scsi / sata
-sector = 512 bytes
-track   = sum (sectors)      in one read of disk arm
-cylind = sum (tracks)        in one read of disk arm

-partition types:
    . primary        - one of the 4 partitions limited by the master boot record (mbr); mbr resides in the 1st sector of the disk (ie first 512 bytes).
    . extended      - one of the primary partitions that is logically broken to create more than 4 partitions.
    . logical          - one of constituents of the extended partition.

-the boot partition must be a primary partition and reside completely in the first 1024 cylinders;
-this is because the bios can't read or boot from the boot partition, if this condition is not met.
-usually 100 MB for boot partition is ok.

-partition recommendations:
    . first define boot
    . then define swap
    . then define /usr, /opt, /var in a single large partition - perhaps /  ?
    . after that define rest of the system like /home etc.

MBR = Master Boot Record.

-MBR (or simply mbr) lives in the first sector of the first primary partition. the mbr contains the partition table, info about the partitions in the system.
-Since a sector = 512 bytes, mbr = 512 bytes & in turn partition table = 512 bytes.
-every media (disk, floppy, cd) contains an executable code in the mbr even if the code is only to put a message "Non-bootable disk in drive A:".
-this is the code that is loaded by bios during the bootstrap. this is called 'stage1 boot loader'.
-this code from mbr / stage1 boot loader (ie first sector) looks for active primary partition and loads the first few blocks of that partition into RAM.
-these few blocks from active primary partition comprise 'stage 2 boot loader'.
-stage 1 + stage 2    =    boot strapping.
-the above works fine if there is only one os in the system. but if there are multiple os, then another piece of code called boot-loader is needed.
-the boot-loader allows the user to select one of the os to boot, ie choose which set of first os-disk-blocks to load into ram.
-note:    even if a system can have 4 primary partitions, it can still have more than 4 bootable os partitions; this is possible bec of boot-loaders.
-eg of boot loaders = grub, lilo, bootmagic.
-bootloader lives in an os partition and is invoked by the mbr.    [[ (mbr.exe) ] -->    (bootloader.exe)    -->    (rest of os partition)    ]
-why is grub considered Better than lilo?   
    Because when changes are made to the system (new os, new kernel) lilo boot-setup needs to be recreated from the cmd line whereas for
    grub only the grub.conf file needs to be re-edited.

Ok, now that we have covered partitions and basics of boot loaders, heres the specifics of Lilo and Grub.

-Lilo    can be installed in the
    . MBR                     or
    . the partition boot record of a partition    or
    . on removable media (floppy, cd, usb key)
-lilo    config file is /etc/lilo.conf

-Grub    can be installed in the
    . MBR                        or
    . the partition boot record of a partition    or
    . on removable media (floppy, cd, usb key)
-grub    config file is /boot/grub/grub.conf
-grub     cmd    /sbin/grub    or    /usr/sbin/grub    is a small but powerful shell that supports several grub cmds.
-grub.conf is generated by anaconda, the linux installer.

-In the grub.conf file    :
    . all counting in grub.conf starts with 0.    eg default=2    => 3rd stanza.
    . splashimage    = the background image for the grub boot menu.
    . root            = partition that will be booted (ie /boot partition).   
        eg:    root (hd0, 6)    =>    /dev/hda7    = /boot partition.
            root (hd1, 10)=>    /dev/hdb11    = /boot partition.
            root (hd2, 7)    =>    /dev/hdc8    = /boot partition.
    . initrd    => initial RAM disk    => the disk partition that contains modules needed by kernel before file systems can be mounted.
   
-To install grub to a removable disk use the 'grub-install' cmd
-eg: for floppy disk:
    # grub-install        /dev/fd0
-note:    this loads the stage 1 boot loader to the first sector of the floppy disk which loads stage2 boot loader (which lives on the hard disk)
-stage1 bootloader on floppy will still show empty when mounted as the first sector does not show up in the filesystem.
-stage1 bootloader only has a list of block addresses for stage2 bootloader.
-So if a partition address changes, grub needs to be reconfigured in order for stage1 to locate stage2 bootloader.

-anyone having access to the grub cmd line also has access to files on the filesystems without the restrictions of file / owner permissions.

-the habit of creating a boot floppy or usb disk is good because it can help in case the mbr gets overwritten by another os install.
-even if the boot floppy or usb disk are not available, then linux install disk can be used to go in recovery mode and then mbr reinstalled.
-eg:    # chroot    /mnt/sysimage            [ on the recovery window, to make /mnt/sysimage as root mount directory ]
    # grub-install                        [ reinstalls mbr ]
-remember:
    [[ mbr    = 1st sector = stage1 boot loader ]]
                |->    stage2 bootloader partition 1                -> grub menu option 1
                |->    stage2 bootloader partition 2                -> grub menu option 2
                |->    stage2 bootloader partition 3                -> grub menu option 3
                . . .                                    . . .

Linux HowTo: E-Mail Configuration

Here is some help on Email configuration on Linux:
=====

-Electronic Mail works on the basis of SMTP.
-SMTP    -  Simple Mail Transfer Protocol;    the standard for mail transport on the Internet.
-it only defines how mail is to be sent from one host to another. it doesnt define how the mail is to be displayed.
-it is platform independent, protocol independent and simple.
-smtp port is 25. smtp prereq is that sending host be able to send ascii text to receiving host.
-basic transfer mechanism used by all mail software (hidden behind a nice looking gui)
        . telnet mailserver port                -client connects to mailserver
        . helo     clientname                    -client introduces itself to mailserver
        . mail from:    sender@dom.com            -sender email (on client)
        . rcpt  to:    receivr@dom.com            -receiver email
        . data:    bla bla                     -mail text
        . <empty line>                    -an empty line
        . <.>                            -a period
        . <empty line>                    -an empty line
        . quit                            -end of conn

-sendmail    -    is a mail server
-postfix    -    is a mail server         with focus on security but simpler than sendmail
-three components of mail service:
        . mua    -  mail user  agent    -  what the user sees    -eg evolution, eudora, outlook,
                    -  only for reading / writing mail
        . mta     -  mail trnsfr agent    -  transfer bet client-srvr    -eg sendmail, postfix (basically smtp servers).
        . mda    -  mail deliv agent    -  puts mail in mailbox    -eg /bin/mail, procmail, exchange server (which does both mda + mta). postfix does only mta.
-useful cmds:
        # yum -y install postfix
        # chkconfig postfix on                    - similarly for sendmail
        # service postfix    start | stop | status | restart        - similarly for sendmail
-postfix config file is        /etc/postfix/main.cf
-postfix process config file    /etc/postfix/master.cf
-To check the postfix config:
        # postfix  check                        - checks main.cf
-usefuls:
        # mailq                - checks mail queue
        # /etc/aliases            - email alias list
        # /var/log/maillog            - config file
-mda serves the mua. the mda procmail serves emails to mua (like evolution) in mbox format.
-mbox        -    a simple text mail format.
-this separation of mda & mua via mbox format is useful in case of offline or remote usage (eg laptops)
-POP = Post Office Protocol:
-idea behind pop:
        . a central mail server manages mail
        . mails queue on the server until clients connect. on the server, the mail format can be mbox, etc
        . clients connect and download mail via pop
-IMAP=Internet Message Access Protocol       
-idea behind imap:
        . imap was developed to fill in some gaps in pop (at univ of washington)
        . eg:    imap allows you to keep a master-copy of mail on the server and download a copy on the client.
-Usual Ports for Mail services:
-ports are defined in /etc/services file
        . pop        110
        . imap        143
-install:
        # yum -y install uw-imap                (uw    =  univ of washington)
-generally imap and pop run under xinetd
        . /etc/xinetd.d/imap
        . /etc/xinetd.d/ipop3
-checking pop and imap:
        . pop    -            # telnet    localhost    110
            user     <username>
            pass     <passwd>
            …
            quit
        . imap    -            # telnet    localhost    143
            login    <username>    <passwd>
            …
            logout
-in their original form, pop or imap dont have encryption. To do encryption, you need ssl.
-when using ssl, the ports are    (specified in /etc/xinetd.d    files?)
        . pop        995                (pop3s    actually)
        . imap        993                (imaps    actually)
-useful log files for mail are:
        . /var/log/messages
        . /var/log/maillog

Linux HowTo: The Linux Gui - X Windows System

What is X ?

X is the GUI for graphical bitmap displays and is very commonly seen on Linux OS. X originated at MIT in 1984 as part of project Athena which provided computing env using dissimilar hardware. The name 'X' is a pun on the name of its predecessor called the 'W' Windows that was developed in Stanford. This project was started by a MIT member Bob Scheifler. In terms of functionality, Linux Operating System is more configurable than Microsoft's Windows Operating System. Eg, in Microsoft Windows you can't turn off the gui where as in Linux the gui (X) is just another process and therefore can be started and stopped as you like.

Here are some detailed notes about X for review:
-X Window System    = X11    = X    = a window system for graphical bitmap displays.
-X separates display functions into a display server and clients.
-X is network transparent.
-X's way of treating client and server is opposite to the traditional way of treating client and server.
-X outputs display and inputs keyboard, mice, touchscreen input.
-X provides a toolkit for GUI but does not specify any user interface. That choice is left to the user. The common choices are Gnome or KDE.
-X is currently maintained by a non-profit org called X.org.

-X server is governed by /etc/X11/xorg.conf

-When the following cmd is run, it produces a temporary file called "/root/xorg.conf.new"
        # Xorg -configure
-To test the above file, run:
        # X-config    /root/xorg.conf.new
-If the X server runs fine, then copy the file to /etc/X11/xorg.conf
-If the following cmd returns a "Fatal server error" then remove the file in the error and retry.

-Good Practices:
        -save a copy of the /etc/X11/xorg.conf file before overwriting it.
        -for troubleshooting, look into /var/log/messages and /var/log/xorg.0.log
-Three ways to start X:
        - run    "X &"     or     "startx"    cmd manually
        - run     init 5    or     telinit 5
        - edit    /etc/inittab and reboot
-Three ways to stop X:
        - hit    ctrl + alt + backspace
        - run    init 3    or    telinit 3
        - edit    /etc/inittab and reboot
-Three ways to modify X behavior:
        - edit     /etc/X11/xorg.conf                [not recommended]
        - run    "system-config-display"
        - run    "Xorg -configure"

-To see the details for X server, run the following cmd:
        # xdpyinfo

-Note:   
-running X can pose security risk. It can allow another client to access and observe your keystrokes.
-to avoid the above risk:
     -use access control using xhost        or
     -tunnel X over ssh
-X cmd is a symbolic link to Xorg cmd.
-it is good to verify that X Font server is running while X server is running. A simple way to do so:
            # service xfs    status

-Difference between    "startx"    and    "X"
        -startx        starts    X    and also launches the default desktop manager (GNOME).

-When exporting the env var DISPLAY  to redirect X output, its set it to the machine where the X server is running.
        # DISPLAY=<X server Addr>:0
-Usually this is done from the X client side.-To check if the X client can send its output to remote X servers, run xhost cmd on the X client without parameters.
-Then "xhost +" can be issued to add X servers.
-Default port for X is 6000. If this port is blocked, a workaround is to run ssh with -X option to display X output.
    -ssh with -X option is a type of ssh tunneling.
    -ssh with a lowercase -x disables the ssh tunneling and is not supposed to be used.
-To troubleshoot ssh:
        - use        -v or -vvv option
        - review    /var/log/secure, /var/log/messages
-To switch between desktops edit the foll file:
        - /etc/sysconfig/desktop file            [which and whatis switchdesk returned null]
-xterm is the standard terminal emulator that runs in X.
Also,
-VNC is a remote desktop sharing system under GPL. However, it is not the same as X.
-For instance, the vnc server and client (called vnc client) are opposite to X server and client.
Learn more about VNC and Linux networking here --> http://timedigit.blogspot.com/2010/09/networking-concepts-linux.html

Tuesday, September 21, 2010

Linux HowTo: Secured Shell (ssh) for Starters

Secured Shell (ssh) for Starters:


-ssh uses the technology of public-key-cryptography as the base.
-it requires two keys to open a file (public + private); somewhat like a bank locker which req two keys (bank's + user's)
      -public   key is freely accessible.
      -private  key is strictly restricted.
      -The combination of public + private key is supposed to be unique.

-how it works?
            . both receiver and sender must have access to each others public key
            . sender encrypts:   sender priv key + receiver pub key + data
            . sender sends
            . receiver decrypts: sender pub key + receiver priv key + data

-ssh is a proprietary protocol owned by the Finnish company ssh communications security.
-although the source code for original ssh is open, varios restrictions are imposed about its use and distribution.
-openssh is the opensource version of ssh under the openbsd project and is more popular and secure than the original ssh.
-to be fully secure, all insecure connections in a network need to be eliminated.
-eg: host a connects to host b via telnet; host b connects to host c via ssh.
            due to the insecure a-b conn, the traffic bet b-c can be monitored and cracked.

-usefuls (on Red Hat Linux):
            # yum -y install openssh-server
            # rpm -qa | grep -i openssh
            # service sshd start | stop | status
            # ssh -6 user@server         [ ipv6 ]
            . /etc/ssh/sshd_config         [ server daemon ]
            . /etc/ssh/ssh_config           [ client   daemon ]
            . ~/.ssh/known_hosts         [ a directory of ssh hosts ]

Linux HowTo: Networking Concepts

Here is a little dabbler to Linux Network Concepts for those of you wannabe Nerds out there..
=====

-VPN    -a n/w that uses a public telecom n/w like the Internet to provide remote network access.
    -the goal of vpn is to provide same level of security as a private n/w at a fraction of the cost.
    -vpns came in vogue in around 2k when leased lines were the only option at a high cost.
    -vpns actually spelled the end of leased lines.
    -vpns provide security by encapsulating the traffic between the two nodes in cryptographic tunnels.
    -vpns use several protocols for providing security - eg ssh, ipsec (ip security), ssl etc.

-Tunneling protocol-
    -a n/w protocol that encapsulates payload of another n/w protocol.
    -this is routinely used in vpn.
    -tunneling usually has two protocols operating - the 'delivery protocol' that encapsulates the 'payload protocol'
-eg:    -delivery protocol = ssh, payload protocol = smb; ssh + smb = ssh tunneling protocol.

-Port Forwarding-
    -Also called Port Mapping.
    -Changing of the destn addr and/or port on a packet.
    -this permits public hosts (eg on the Internet) to connect to a specific host within a private lan.
-scenarios of Port Forwarding:
    -running a public http server within a private lan at port 80
    -permitting ssh access to hosts on the private lan from the Internet at port 22
    -permitting ftp access to hosts on the private lan from the Internet at port 21.
-Port Forwarding is achieved by-
    -iptables cmd     in linux
   
-Note:    -In a typical home lan via a router, the Internet sees only the router which holds the public ip addr.
    -the hosts behind the router are invisible to the Internet.
    -Port forwarding on the router permits communications by external hosts with services provided within a private lan

-PAT    -Port Addr Translation-
    -Permits communication between hosts on a private n/w and hosts on a public n/w.
    -It allows a single IP addr to be used by many hosts on a private n/w.   
    -PAT device (usually router) transparently modifies IP packets as they pass through it.
    -PAT device modifies the senders IP Addr and Port number (to a public ip and port)
    -PAT is subset of NAT.
    -PAT is also known as NAT overload.
    -PAT operates on layer 3 & 4 (network, transport resp). NAT operates only on layer 3.

-Socket    -ip + port pair        (much like a telephone line and its extn).
    -the socket needs to be known by both source and target host for communication to happen.

-Note:    -ICMP packets dont have source and target port numbers (TCP and UDP do).
    -NAT Translates IP addr only. 1to1 IP translation also called Static NAT.
    -PAT Translates IP addr + port (ie socket). Also called NAT overload.
-Compare this with traditional TV networks...
-Cable TV-
    -Provides TV Broadcast in the form of Radio Freq Signals over optical fiber or coaxial cables.
    -This is different from traditional TV Broadcast via radio waves over-the-air.
    -Cable TV networks have a high bandwidth.

Linux HowTo: Automated Install on Ubuntu - KickStart, PreSeed, Diskless Clients

-ubuntu automated install is based on redhat-kickstart and debian-preseed.
-diskless client -  multiple terminals running on a remote computer.
-automated install process:
        . boot to install screen
        . point to appropriate config file
        . let install proceed on its own
-debian-preseed is more flexible but no gui. redhat-kickstart has a gui.
-kickstart install:
        # sudo apt-get install system-config-kickstart
        # system-config-kickstart        - to config the install-text file
        . onscreen-dialog: 'File > save'    - to save install-text file
        . save to an http or ftp server
        . boot from install cd, press esc, type the foll:
        . boot: install ks=http://<ip addr>/<kickstart text file>
-note:    a kickstart install file based on current config is in works for ubuntu.
-'zerombr yes'    - option in the kickstart install file to wipe out previos mbr.
-preseed install:   
        . needs pkg debconf-utils
        . ubuntu install disk has a file called 'example-preseed.txt.gz'
        . in this file most lines start with 'd-i' short for debian installer

-diskless clients - are terminals running on older h/w but are not dumb terminals.
-diskless clients - dont require hard drive, but a monitor and video card
-diskless clients - use dhcp for ip addr assignment and
              - use tftp (trivial ftp) for file share across n/w
-tftp is insecure and runs on port 69. it should be blocked beyond local n/w.
-diskless clients / tftp - based on linux terminal server project (ltsp).
-diskless clients - require ltsp-server-standalone package
        # sudo apt-get install ltsp-server-standalone

-dhcp config file- /etc/dhcp3/dhcpd.conf &
             - /etc/dhcpd.conf

GNU GPL stand on Hardware

According to FSF's Founder Richard Stallman:
=====

1-"Free software is often available for zero price, since it often costs you nothing to make your own copy. For hardware, the difference between ``free'' and ``gratis'' is more clear-cut; you can't download hardware through the net, and we don't have automatic copiers for hardware. So you must expect that making fresh a copy of some hardware will cost you, even if the hardware or design is free. The parts will cost money, and only a very good friend is likely to make circuit boards or solder wires and chips for you as a favor."

2-"Because copying hardware is so hard, the question of whether we're allowed to do it is not vitally important. I see no social imperative for free hardware designs like the imperative for free software."

3-"Circuits cannot be copylefted because they cannot be copyrighted. Definitions of circuits written in HDL (hardware definition languages) can be copylefted, but the copyleft covers only the expression of the definition, not the circuit itself. Likewise, a drawing or layout of a circuit can be copylefted, but this only covers the drawing or layout, not the circuit itself. What this means is that anyone can legally draw the same circuit topology in a different-looking way, or write a different HDL definition which produces the same circuit. Thus, the strength of copyleft when applied to circuits is limited. However, copylefting HDL definitions and printed circuit layouts may do some good nonetheless.It is probably not possible to use patents for this purpose either. Patents do not work like copyrights, and they are very expensive to obtain.

Whether or not a hardware device's internal design is free, it is absolutely vital for its interface specifications to be free.
We can't write free software to run the hardware without knowing how to operate it.
Selling a piece of hardware, and refusing to tell the customer how to use it, strikes me as unconscionable."

====
Source:

http://www.linuxtoday.com/news_story.php3?ltsn=1999-06-22-005-05-NW-LF
Richard Stallman -- On "Free Hardware"
Jun 22, 1999, 04:27 UTC

Monday, September 20, 2010

Linux HowTo: KVM - Kernel Based Virtualization Primer

KVM on Linux - Some Useful Concepts & Notes
===

-history-  2003    -64-bit processors for x86 introduced.
    -this eliminated the physical address space limitation of these chips.
    -this led to more powerful x86 servers that could hold a lot more memory.

-concepts-
    -virtualization works on the concept of ring levels.
    -there are 4 privilege ring levels - 0 to 3.
    -ring 0    -most privileged level with full access to h/w, usually the os kernel.
    -rings 1&2 have historically not been used in modern commercial os.
    -ring 3    -the top layer, ie the applications.
    -in virtual envs, a hypervisor runs at the most privileged level, ring 0.
    -trick of vm - fool the os into running at a higher (less privileged) ring level while retaining functionality.
-early hypervisors like Bochs emulated the x86 cpu fully in s/w - this meant poor performance.
-newer hypervisors, pioneered by vmware, used a technique called binary translation.
    -in binary translation, the hypervisor intercepts privileged os instructions and translates them in memory.
    -the guest os is unaware that it is running on a hypervisor.
    -this approach is more complex than full emulation but performs much better.
-paravirtualization-
    -this was pioneered by the open source Xen project.
    -it is different from both cpu emulation and binary translation.
    -in this approach the guest os is modified and all privileged calls are replaced with direct calls (hypercalls) to the hypervisor.
    -the guest os knows that it is running on a hypervisor.
    -this removes the need to emulate h/w devices like disk controllers or n/w cards.
    -this requires changes to the guest os kernel.
    -guest-side paravirtualization support was incorporated in linux kernel 2.6.23.
    -in this model the hypervisor runs in ring 0, the guest os kernel in ring 1, and user apps in ring 3.
-hardware assisted virtualization-
-2005    -Intel & AMD both developed extensions to the x86 arch (Intel VT-x and AMD-V) that hypervisor vendors could use to simplify cpu virtualization.
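A quick way to see whether a cpu has these extensions is to look for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo; a minimal sketch:

```shell
# Count logical CPUs advertising hardware virtualization extensions.
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means no hardware assist.
hvm_cpus=$(grep -E -c 'vmx|svm' /proc/cpuinfo || true)
echo "HVM-capable logical CPUs: $hvm_cpus"
```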

-kvm    -builds on the latest generation of open source virtualization.
    -a loadable kernel module that turns the linux kernel into a bare metal hypervisor.
    -each guest vm runs as a regular linux process, scheduled by the host linux kernel.
    -each virtual cpu appears as a regular linux thread within that process.
    -device emulation is handled by qemu, which provides an emulated bios, pci bus, usb, nic etc.
    -since each guest is a linux proc, kvm leverages linux security features like selinux & sVirt.
    -the sVirt project builds on selinux to isolate vms from one another.
    -note that a flaw in the hypervisor can expose every guest; selinux/sVirt isolation limits the damage.
    -a vm is only as secure as its hypervisor.

    -any h/w device supported by linux can be used by kvm.
    -linux enjoys the largest h/w support base - lots of drivers, storage options etc.
    -kvm supports live migration of vms with zero downtime for apps.
    -kvm supports a variety of guest os - linux, win, bsd, solaris, dos etc.
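When the kvm module is loaded on a capable cpu, the kernel exposes the /dev/kvm device node; a simple check (no root needed):

```shell
# /dev/kvm exists only when the kvm module is loaded on a capable CPU.
if [ -e /dev/kvm ]; then
    kvm_state="available"
else
    kvm_state="not available"
fi
echo "KVM on this host: $kvm_state"
```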

Linux HowTo: Ubuntu on Laptop - Some Tips

If you are considering installing Ubuntu on your laptop, here are some tips you may find useful before you proceed:


-the best way to test a laptop's compatibility with Linux is a LiveCD.
-drivers included with the linux kernel, whether built in or loadable modules, are open source.
-Ubuntu adds to the linux kernel by including restricted drivers, which are not open source.
-the availability and updates of restricted drivers depend on the h/w manufacturer.
-the most reliable source of h/w documentation for linux is tldp (The Linux Documentation Project).

-hal    - the communication layer between os and h/w. runs as the daemon hald.
-udev    - device manager that creates /dev nodes for connected devices.
-lshal    - shows the full list of h/w detected by hal.

-kernel modules - external pluggable drivers for the kernel.
-/etc/modules    - list of kernel modules to load at boot
-lsmod        - list currently loaded modules
-modprobe    - load a module along with its dependencies
-depmod        - rebuild the module dependency map (modules.dep)
-modinfo    - show details of a module, incl. its dependencies
-eg:    # lsmod
    # modprobe usb3945        - load the module
    # modinfo usb3945        - show module details and dependencies

-acpi and apm    - two basic power mgmt standards that put linux in control of power mgmt.
-apm cmds:
    # apt-get install apmd
    # apm                -show power/battery status, requires kernel apm support
-one of the problems with acpi is that different h/w makers configure different power events.
-acpi config file:    /etc/default/acpid
-acpi log file     :    /var/log/acpid
-acpi does more than just power mgmt. it can control events like brightness, zoom, n/w events etc.
-to verify, ls /etc/acpi/events
-acpi power state msgs show as 'S' states in /var/log/dmesg
-eg:    # grep S0 /var/log/dmesg        -S0 being the default normal state.

-useful cmds for hardware mgmt - smartctl and hdparm
-smartctl -    SMART disk health and diagnostics info
-hdparm      -    view and set disk drive parameters
-lsusb      -    usb device info
-iwconfig -    wireless device config info
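Whether these tools are present varies by install (smartctl ships in smartmontools, lsusb in usbutils, iwconfig in wireless-tools - package names assumed from Debian/Ubuntu); a quick availability check before using them:

```shell
# Report which of the hardware-management tools are installed on this system.
missing=""
for tool in smartctl hdparm lsusb iwconfig; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
    echo "all tools present"
else
    echo "missing:$missing"
fi
```

Once installed, typical invocations would be `smartctl -a /dev/sda` for a full SMART report and `hdparm -I /dev/sda` to identify the drive (device name /dev/sda assumed; substitute your disk).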