-nfs (network file system) is a protocol that lets computers share files over a network.
-nfs was introduced by Sun in 1984 and made freely available to the public with v2 in 1989.
-nfs is now an open standard and all Linux/Unix systems ship some nfs implementation.
-nfs is transparent; if the server crashes, no data is lost.
-clients can simply wait and resume work when the server comes back--as if nothing happened.
-nfsv2 was slow because the server had to commit each modified block to disk before replying to the client.
-nfsv2 used udp; nfsv3 gave a choice of udp or tcp; nfsv4 is tcp only.
-nfsv2 and v3 are stateless, nfsv4 is stateful.
-stateless - the server doesn't keep track of which client has mounted what; this simplifies recovery after a crash.
-stateful - server and client keep track of open files and locks; recovery requires both server and client to work together.
-nfsv4 adds speed, security, and support for other operating systems, use over the internet, acls, etc.
-the recommended nfs version is v4, or at least v3.
-nfsv2 and v3 have poor security. v4 mandates strong security.
-auth_none => no security; auth_sys => traditional unix uid/gid (/etc/passwd style) security; rpcsec_gss => strong security, required by v4.
-nfsv2/v3 use raw uid and gid values to identify users and grant access; v4 uses user@domain and group@domain names instead.
-nfsv4 uses port 2049 over tcp.
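-example: a minimal sketch of mounting an nfsv4 export with rpcsec_gss (kerberos) security, assuming a Linux client with mount.nfs, a working krb5 setup, and a hypothetical server 'fileserver':
    # strong security: rpcsec_gss via kerberos (sec=krb5), nfsv4 over tcp on port 2049
    mount -t nfs4 -o sec=krb5,proto=tcp,port=2049 fileserver:/export /mnt/data
    # weaker, traditional uid/gid security (auth_sys) for comparison
    mount -t nfs4 -o sec=sys fileserver:/export /mnt/data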
-in nfs, user id mapping has nothing to do with user authentication.
-e.g. user john with uid 1000 on the client may be a different user named joe with uid 1000 on the server.
-this means files requested by john can actually belong to joe.
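-example: a minimal sketch of the v4 user@domain mapping, assuming the Linux rpc.idmapd daemon and its /etc/idmapd.conf; the domain name is a placeholder and must match on client and server for names to map correctly:
    # /etc/idmapd.conf (same 'Domain' on both sides)
    [General]
    Domain = example.com

    [Mapping]
    # remote users that cannot be mapped fall back to this local account
    Nobody-User = nobody
    Nobody-Group = nobody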
-traditionally, root access to nfs is limited.
-root (uid 0) on the client is mapped to the unprivileged user nobody on the server; this is called 'root squashing'.
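-example: a sketch of root-squashing controls in /etc/exports, assuming Linux nfs-utils export options and hypothetical hosts/paths:
    # root_squash (the default) maps uid 0 to nobody; no_root_squash turns that off
    /export/data    client1(rw,sync,root_squash)
    /export/backup  adminhost(rw,sync,no_root_squash)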
-nfs server daemons:
. mountd - serves mount requests from clients.
. nfsd - actually serves the file data once a share is mounted.
. portmap - the rpc portmapper that maps rpc services to ports; rpc is the underlying protocol behind mountd and nfsd.
-some systems prefix the daemon names with 'rpc.', e.g. rpc.mountd.
-multiple instances (threads) of nfsd may be run to increase file-serving throughput; see the sketch below.
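-example: a quick sketch for checking the daemons and bumping the nfsd count, assuming a Linux server with rpcbind/portmap and nfs-utils (the thread count 16 is an arbitrary example):
    # verify that portmap/rpcbind knows about mountd and nfsd
    rpcinfo -p localhost
    # run 16 nfsd threads instead of the default
    rpc.nfsd 16
    # confirm the current thread count
    cat /proc/fs/nfsd/threads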
-filesystems are shared (exported) via entries in /etc/exports.
-to export everything listed in /etc/exports: exportfs -a
-to unexport a share: exportfs -u client:/dir (exportfs -ua unexports everything); see the sketch below.
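-example: a small export workflow, assuming the Linux nfs-utils exportfs and the hypothetical /etc/exports entries above:
    exportfs -a                      # export everything listed in /etc/exports
    exportfs -ra                     # re-export after editing /etc/exports
    exportfs -v                      # show what is currently exported, with options
    exportfs -u client1:/export/data # unexport one share
    exportfs -ua                     # unexport everything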
-not a good idea to share binaries over nfs.
-client cmds:
. showmount -e remoteserver - list what the remote server exports
. mount -o <options> remote:/dir mtpt - mount an exported directory
-the nfs mount option 'hard' means clients wait indefinitely if the server crashes; 'soft', which times out and returns an error instead, is better.
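-example: a client-side sketch, assuming a Linux client and a hypothetical server 'fileserver':
    # see what the server exports
    showmount -e fileserver
    # soft mount: gives up after the retries instead of hanging forever (timeo is in tenths of a second)
    mount -t nfs -o soft,timeo=100,retrans=3,proto=tcp fileserver:/export/data /mnt/data
    # hard mount (the traditional default): retries indefinitely until the server responds
    mount -t nfs -o hard fileserver:/export/data /mnt/data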
-nfs stats:
. nfsstat -s - server stats
. nfsstat -c - client stats
-if more than 3% of rpc calls fail, there is a problem in the nfs/network setup.
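-example: a sketch of the usual health checks beyond -s and -c, assuming the Linux nfsstat from nfs-utils:
    # rpc counters only; a high retrans/timeout count relative to total calls points at network trouble
    nfsstat -r
    # mounted nfs filesystems and the mount options in effect
    nfsstat -m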
-automount daemon - mounts nfs filesystems on demand and unmounts them when not in use.
-it runs on the client with startup script /etc/init.d/autofs; the master config file is /etc/auto.master.
-the daemon is called automountd and the admin command is automount.
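-example: a minimal automounter sketch, assuming Linux autofs with hypothetical map names and server:
    # /etc/auto.master - mounts under /data are managed by the map file /etc/auto.data
    /data   /etc/auto.data   --timeout=60

    # /etc/auto.data - key, mount options, remote location
    # accessing /data/projects triggers the mount; it is unmounted again after 60s idle
    projects   -rw,soft   fileserver:/export/projects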
-aside from sharing user data, systems need to share system files like passwd files, hosts files etc.
-tools that help share such config data: ldap, active directory (microsoft's adaptation of ldap), and nis.
-nis is old; ldap is preferred over nis for new installs.