Gallery


Util


Using Scala REPL

The Scala REPL (short for Read-Eval-Print-Loop) is an interactive command-line shell. To start it, open a command prompt and simply type scala. A new Scala prompt appears, waiting for your input. Type any Scala expression or statement, hit Enter, and the result is printed immediately.

Using Scala interpreter to run scala script

You can save Scala code in a file with a .scala extension (any extension works, but .scala is preferred) and run it by passing the file name to the Scala interpreter. Create the file HelloScala.scala with the following code:

val str = "Hello " + "Scala "
println("'str' contents : " + str)

Now run the file as follows:

$> scala HelloScala.scala
'str' contents : Hello Scala

As you can see, no class definition or other declaration is required: put the code in a file and it is ready to run.

Using Scala interpreter

A typical Scala program contains many code chunks spread across many files. Running such a program takes two stages: compile the source code with the Scala compiler, then run the compiled bytecode. Let's create a file named Hello.scala with the following code:

object Hello {
  def main(args: Array[String]): Unit = {
    println("Hello, Scala !!")
  }
}

A little explanation of the program: an object is how Scala represents static members, and inside it we define a main method that takes an array of strings and returns Unit, the equivalent of void in Java. It plays the same role as main in Java, in Scala form. Compile the file using the Scala compiler, scalac:

$> scalac Hello.scala

This creates a couple of class files in the current directory. To run them, pass the class name (without the .scala or .class extension) to the scala command (or to the java interpreter, a little more on this shortly). In our case:

$> scala Hello
Hello, Scala !!

Using Java interpreter

Compiled Scala code is ordinary JVM bytecode, so it can also be run with the java launcher shipped with any standard JRE distribution. The only extra requirement is that scala-library.jar, located under $SCALA_HOME/lib, must be on the classpath:

$> java -cp $SCALA_HOME/lib/scala-library.jar:. Hello

(Use : as the classpath separator on Unix-like systems and ; on Windows.)

Using Scala worksheet

The Scala worksheet is part of the Scala IDE for Eclipse. It is like the REPL, but more convenient and more powerful. The official GitHub repo wiki describes it as follows: "A worksheet is a Scala file that is evaluated on save, and the result of each expression is shown in a column to the right of your program. Worksheets are like a REPL session on steroids, and enjoy 1st class editor support: completion, hyperlinking, interactive errors-as-you-type, auto-format, etc." To create a new worksheet in the Scala IDE, first create a Scala project, then right-click the project and choose New > Scala Worksheet. You will be prompted for a worksheet name and a target folder; give it any name, accept the default folder, and hit Enter. The worksheet opens with the evaluation results shown in a column to the right of your code. Write any code inside the object body, hit save, and the output appears alongside it.

Howto

Bash

Spam Assassin

apache-james roy.james@xemaps.com

http://wiki.apache.org/spamassassin/Rules/SL_HELO_NON_FQDN_1 http://wiki.apache.org/spamassassin/Rules/HELO_LOCALHOST http://wiki.apache.org/spamassassin/Rules/RCVD_NUMERIC_HELO http://wiki.apache.org/spamassassin/Rules/SPF_NEUTRAL

apt-get install spamassassin
spamassassin -D < nospam-corporate-umg-1.txt 2> out
vi /etc/spamassassin/local.cf

Set the threshold at which a message is considered spam (default: 5.0)

required_score 11.0
score RCVD_IN_XBL 0 0 0 0
vi /etc/default/spamassassin

Change to one to enable spamd

ENABLED=1
tail -f /var/log/syslog
create /nonexisting/.spamassassin ???
/etc/init.d/spamassassin start

Benchmark

ab -c 10 -n 100000 http://localhost:8080/app/

Monitoring

nmon

GPG

http://www.apache.org/dist/james/server/james-binary-2.3.2.tar.gz
http://www.apache.org/dist/james/server/james-binary-2.3.2.tar.gz.asc
http://www.apache.org/dist/james/KEYS

And tried verifying the signature for the download using:

gpg --import KEYS
gpg --verify apache-james-2.3.2.tar.gz.asc
gpg: Signature made Tue 11 Aug 2009 08:35:01 NZST using RSA key ID A6EE6908
gpg: Can't check signature: public key not found

This doesn't look good! Looking through the KEYS file there doesn't appear to be a key for A6EE6908. Fetching the key from pgpkeys.mit.edu produces the following:

gpg --keyserver pgpkeys.mit.edu --recv-key A6EE6908
gpg: requesting key A6EE6908 from hkp server pgpkeys.mit.edu
gpg: key A6EE6908: public key "Robert Burrell Donkin (CODE SIGNING KEY) rdonkin@apache.org" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)

And the fingerprint looks like this:

gpg --fingerprint A6EE6908
pub 8192R/A6EE6908 2009-08-07
Key fingerprint = 597C 729B 0237 1932 E77C B9D5 EDB8 C082 A6EE 6908
uid Robert Burrell Donkin (CODE SIGNING KEY) rdonkin@apache.org
sub 8192R/B800EFC1 2009-08-07

[dhcp-78-195-249:~/tmp/gora-0.2] mattmann% gpg --import < KEYS
gpg: key 3592721E: "Henry Saputra (CODE SIGNING KEY) hsaputra@apache.org" not changed
gpg: key B876884A: "Chris Mattmann (CODE SIGNING KEY) mattmann@apache.org" not changed
gpg: key C601BCA7: public key "Lewis John McGibbney (CODE SIGNING KEY) lewismc@apache.org" imported
gpg: Total number processed: 3
gpg: imported: 1 (RSA: 1)
gpg: unchanged: 2

[dhcp-78-195-249:~/tmp/gora-0.2] mattmann% $HOME/bin/verify_gpg_sigs
Verifying Signature for file gora-0.2-src.tar.gz.asc
gpg: Signature made Thu Apr 19 09:04:21 2012 PDT using RSA key ID C601BCA7
gpg: Good signature from "Lewis John McGibbney (CODE SIGNING KEY) lewismc@apache.org"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 2A23 D53F 8D27 5CB6 91E1 89C1 F45E 7970 C601 BCA7

The same "Good signature" result, with the same untrusted-key warning and the same primary key fingerprint, was reported for every remaining artifact: the .tar.gz and .zip source archives of gora-0.2, gora-accumulo-0.2, gora-cassandra-0.2, gora-core-0.2, gora-hbase-0.2, gora-sql-0.2 and gora-tutorial-0.2.

Checksums look good:

[dhcp-78-195-249:~/tmp/gora-0.2] mattmann% $HOME/bin/verify_md5_checksums
md5sum: stat '.bz2': No such file or directory
gora-0.2-src.tar.gz: OK
gora-accumulo-0.2-src.tar.gz: OK
gora-cassandra-0.2-src.tar.gz: OK
gora-core-0.2-src.tar.gz: OK
gora-hbase-0.2-src.tar.gz: OK
gora-sql-0.2-src.tar.gz: OK
gora-tutorial-0.2-src.tar.gz: OK
gora-0.2-src.zip: OK
gora-accumulo-0.2-src.zip: OK
gora-cassandra-0.2-src.zip: OK
gora-core-0.2-src.zip: OK
gora-hbase-0.2-src.zip: OK
gora-sql-0.2-src.zip: OK
gora-tutorial-0.2-src.zip: OK

[dhcp-78-195-249:~/tmp/gora-0.2] mattmann% curl -O http://people.apache.org/~jghoman/giraph-0.1.0-incubating-rc0/giraph-0.1.0-incubating-src.tar.gz
curl -O http://people.apache.org/~jghoman/giraph-0.1.0-incubating-rc0/giraph-0.1.0-incubating-src.tar.gz.asc
curl -O http://people.apache.org/~jghoman/giraph-0.1.0-incubating-rc0/giraph-0.1.0-incubating-src.tar.gz.md5
curl -O http://www.apache.org/dist/incubator/giraph/KEYS

gpg --import KEYS
gpg: key 3D0C92B9: public key "Owen O'Malley (Code signing) omalley@apache.org" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 2 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 2u

$HOME/bin/verify_gpg_sigs
Verifying Signature for file giraph-0.1.0-incubating-src.tar.gz.asc
gpg: Signature made Tue Jan 31 14:50:26 2012 PST using RSA key ID FCA366B7
gpg: Can't check signature: No public key
gpg --verify giraph-0.1.0-incubating-src.tar.gz.asc giraph-0.1.0-incubating-src.tar.gz

$HOME/bin/verify_md5_checksums
md5sum: stat '.bz2': No such file or directory
md5sum: stat '*.zip': No such file or directory
giraph-0.1.0-incubating-src.tar.gz: OK

MD5

http://raamdev.com/2008/howto-install-md5sum-sha1sum-on-mac-os-x/

BITTORRENT

ctorrent -s 1 -e 12 -C 32 -p 400 -u http://www.sumotracker.org/announce file.torrent
ctorrent -d -s out-folder -e 12 -C 32 -i 173.12.23.23 -p 6881 file.torrent

I needed a command-line BitTorrent client for my Fedora Core 6 box and started to look at the options. I found ctorrent, which has the features I need, and as it is written in C it should be fast (I know there is another one written in Python). Let's see how to install and use this one.

First we need to install it (you will need the extras repository for this):

yum install ctorrent

If you run ctorrent with no arguments, this is what you get:

CTorrent dnh2 Original code Copyright: YuHong(992126018601033)
WARNING: THERE IS NO WARRANTY FOR CTorrent. USE AT YOUR OWN RISK!!!
Generic Options:
-h/-H Show this message.
-x Decode metainfo(torrent) file only, don't download.
-c Check exist only. don't download.
-v Verbose output (for debugging).
Download Options:
-e int Exit while seed hours later. (default 72 hours)
-E num Exit after seeding to ratio (UL:DL).
-i ip Listen for connection on ip. (default all ip's)
-p port Listen port. (default 2706 -> 2106)
-s save_as Save file/directory/metainfo as...
-C cache_size Cache size, unit MB. (default 16MB)
-f Force seed mode. skip hash check at startup.
-b bf_filename Bit field filename. (use it carefully)
-M max_peers Max peers count.
-m min_peers Min peers count.
-z slice_size Download slice/block size, unit KB. (default 16, max 128).
-n file_number Which file download.
-D rate Max bandwidth down (unit KB/s)
-U rate Max bandwidth up (unit KB/s)
-P peer_id Set Peer ID [-CD0201-]
-S host:port Use CTCS server
Make metainfo(torrent) file Options:
-t With make torrent. must specify this option.
-u url Tracker's url.
-l piece_len Piece length.(default 262144)
eg.
hong> ctorrent -s new_filename -e 12 -C 32 -p 6881 eg.torrent

home page: http://ctorrent.sourceforge.net/
see also: http://www.rahul.net/dholmes/ctorrent/
bug report: dholmes@ct.boxmail.com
original author: bsdi@sina.com

ALT^arrow_left / ALT^arrow_right: go to the beginning or the end of the line

source file.properties

#!/bin/bash
username=…
mysql -u $username

CTRL^R: search backwards in command history
CTRL^…: go forward in history

screen -ls // list all the screens
screen -S aq // create a new screen
screen -r aq // join an existing screen
screen -D -r '1234.somescreensession'

dmesg

  1. Download Ubuntu Desktop
  2. Open the Terminal (in /Applications/Utilities/ or query Terminal in Spotlight).
  3. Convert the .iso file to .img using the convert option of hdiutil (e.g., hdiutil convert -format UDRW -o ~/path/to/target.img ~/path/to/ubuntu.iso). Note: OS X tends to put the .dmg ending on the output file automatically.
  4. Run diskutil list to get the current list of devices.
  5. Insert your flash media.
  6. Run diskutil list again and determine the device node assigned to your flash media (e.g. /dev/disk2).
  7. Run diskutil unmountDisk /dev/diskN (replace N with the disk number from the last command; in the previous example, N would be 2).
  8. Execute sudo dd if=/path/to/downloaded.img of=/dev/rdiskN bs=1m (replace /path/to/downloaded.img with the path where the image file is located; for example, ./ubuntu.img or ./ubuntu.dmg). Using /dev/rdiskN instead of /dev/diskN may be faster. If you see the error dd: Invalid number '1m', you are using GNU dd; use the same command but replace bs=1m with bs=1M. If you see the error dd: /dev/diskN: Resource busy, make sure the disk is not in use: start Disk Utility.app and unmount (don't eject) the drive.
  9. Run diskutil eject /dev/diskN and remove your flash media when the command completes.
  10. Restart your Mac and press alt/option key while the Mac is restarting to choose the USB stick.

sort
uniq
wc
wc -l
ls -lh

list=*.csv
for file in $list
do
  cat $file >> new_file.csv
done
cat -vet

table=yourtable
hive -e "load data local inpath '$file' into table $table"
cat *.csv > output.csv

netstat -npl
netstat -nr
netstat -a -t --numeric-ports -p
sockstat -l | grep sshd

jflex flex lex
chmod -R 755 . # default permission
tty
script -a /dev/pts/1
xmllint
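The append-in-a-loop idiom above can be sketched end to end. This is a minimal, self-contained demo; the directory /tmp/csvdemo and the file names are made up for the example.

```shell
# Concatenate every .csv file in a directory into one file with >> (append).
mkdir -p /tmp/csvdemo && cd /tmp/csvdemo
printf 'a,1\n' > data1.csv
printf 'b,2\n' > data2.csv
rm -f merged.csv            # start clean so the glob below doesn't pick it up
for f in *.csv; do
  cat "$f" >> merged.csv    # append each file's rows
done
wc -l merged.csv            # → 2 merged.csv
```

Note that `cat *.csv > output.csv` does the same job in one command when no per-file processing is needed.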

$ cat /proc/meminfo
$ less /proc/meminfo
$ more /proc/meminfo
$ egrep --color 'Mem|Cache|Swap' /proc/meminfo

Sample outputs:

MemTotal: 8120568 kB
MemFree: 2298932 kB
Cached: 1907240 kB
SwapCached: 0 kB
SwapTotal: 15859708 kB
SwapFree: 15859708 kB

$ free -m

command | tee file
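tee copies its stdin both to a file and to stdout, so you can watch a command's output while logging it. A small sketch (the log path is illustrative):

```shell
# First write creates/overwrites the log; tee also echoes the line to the terminal.
echo "hello" | tee /tmp/tee_demo.log
# -a appends instead of overwriting.
echo "again" | tee -a /tmp/tee_demo.log > /dev/null
```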

w3m

lspci lsusb dmesg |grep eth0

more /etc/fstab
fdisk -l
du -hs /path/to/directory | sort
df -h

Usually I add -h to make the sizes human readable. Another good tool for checking the disk space used by directories is du. You may have noticed that when you type ls -l, every directory shows the same size, 4096: that is because a directory is itself just a file. What we usually want to know is how much the directory's contents occupy, not the size of the directory file itself.

To show the size of all directories, including subdirectories: du -h
To total the directory you are in (-s stands for summary): du -sh
To show each first-level subdirectory's size (ignoring sub-subdirectories): du -sh *
To show the size of a specific directory: du -sh /home
To show the size of each subdirectory of a specific directory: du -sh /home/*
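The du variants above can be tried safely on a throwaway tree; /tmp/dudemo and the 1 MiB file are made up for the demo.

```shell
# Build a small tree and measure it with du.
mkdir -p /tmp/dudemo/sub
head -c 1048576 /dev/zero > /tmp/dudemo/sub/file.bin  # 1 MiB of data
du -sh /tmp/dudemo     # summary: total size of the whole tree
du -sh /tmp/dudemo/*   # one line per first-level entry
```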

Kernel

mtr
dig +trace hostname
traceroute

File descriptors: stdin (0), stdout (1) and stderr (2).
strace echo "1"

command > /dev/null 2>&1   # discard both stdout and stderr (1>&2 on its own sends stdout to stderr)
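Redirection demo: 2>&1 duplicates stderr onto whatever stdout currently points at, which is how both streams get captured or discarded together. The strings here are illustrative.

```shell
# Capture stdout and stderr of a command group into one variable.
combined=$( { echo "to stdout"; echo "to stderr" 1>&2; } 2>&1 )
printf '%s\n' "$combined"            # both lines appear
# Silence a noisy command completely:
ls /nonexistent > /dev/null 2>&1 || true
```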

time smtp-source -A -C1500 -l 100 -m 100000 -s 500 -d -c -f nm@test.de -t te 213.157.22.218:25
time smtp-source -L -s 40 -m 100 -l 4096 -d -c -f me@elasticinbox.com -t test@elasticinbox.com ElasticInbox-LB-1070648408.eu-west-1.elb.amazonaws.com:2400

for i in $(seq -w 1 1000); do lsof -a -u dweiss -c java > snap.$i; sleep 5; done

find queue-jms/src/test/ -name '*.java' -print | xargs sed -i 's/\t/ /g'
find /tmp/ -name 'aos-bu-*' -print0 | xargs -0 rm -fr

tr 'A-Z' 'a-z' < subtitles_124.txt | tr -sc 'A-Za-z' '\n' | sort | uniq -c | sort -n -r | less
tr ";" "," < in.csv | tr -d '"' > out.csv

echo $?
tar xvfj *.bz2
tar xvfz *.tar.gz
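The tr/sort/uniq pipeline above is a classic word-frequency counter. A runnable sketch on an inline sample string instead of subtitles_124.txt:

```shell
# Lowercase, split non-letters into newlines (squeezed), count and rank words.
top=$(printf 'To be or not to be\n' \
  | tr 'A-Z' 'a-z' \
  | tr -sc 'a-z' '\n' \
  | sort | uniq -c | sort -n -r \
  | head -1)
echo "$top"   # most frequent word with its count, e.g. "2 to" (tied with "be")
```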

locate file

bzcat stackoverflow.com-Posts.7z | hdfs dfs -put - /user/srowen/Posts.xml

patch -p0 --dry-run < file.patch

Ubuntu startup scripts: vi /etc/init.d

Fedora startup scripts: I have a Fedora Core box which needs to run various scripts on startup to connect to other boxes on the network. After a bit of fiddling around, I found what appears to be the best solution for me, using ntsysv and init.d. Here's how it's done.

Simple Commands; Complex Commands; The For Structure; Example For Syntax; The While Structure; Example While Syntax; The If Structure; Example Simple If Syntax; Example Complex If Syntax; The Case Structure; Example Case Syntax; The Parent & Sub-Shell Structure; The Function Structure; Example Function Syntax; Special Commands; Comment Structure; Built-In Commands (Simple, Complex & Special Commands)

Back in the man pages, the next section is called USAGE and goes on to talk about pipelines and lists. Most of what it says there can be understood by any UNIX user, so I will skip it for now, but there will be some examples later showing various implementations of these definitions. The issue I want to deal with next is simple, complex and special commands. This is nowhere near as bad as it sounds.

Simple Commands

Simple commands are just straight UNIX commands that exist regardless of the surrounding shell environment, like our old favourites ls -l or df -al or lpr -Pprinter filename. A large number of commands fall into this category, but the following selection is among the more useful when scripting:

sort - sorts lines in ascending, descending and unique order
grep - searches for regular expressions in strings or files
basename - strips the path from a path string to leave just the filename
dirname - removes the file from a path string to leave just the pathname
cut - chops up a text string by characters or fields
wc - counts the characters, words, or lines
[ (test) ] - predicate or conditional processor
tr 'a' 'b' - transforms characters
expr - simple arithmetic processor
bc - basic calculator
eval - evaluates variables
echo - outputs strings
date - creates date strings
nawk - manipulates text strings
head | tail - access lines in files

Some of the above commands can be very complex indeed, especially when assembled into pipelines and lists. However, these are still referred to as simple commands - presumably because they stand alone.
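A few of the simple commands from the list, applied to one path string (the path itself is just an illustration):

```shell
p=/var/log/syslog.1
basename "$p"              # syslog.1
dirname "$p"               # /var/log
echo "$p" | cut -d/ -f2    # var   (field 1 is empty: the string starts with /)
echo "$p" | wc -c          # 18 characters, counting the trailing newline
echo "$p" | tr 'a-z' 'A-Z' # /VAR/LOG/SYSLOG.1
```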
Take a close look at the man pages for all of the above commands; you will find them invaluable during your scripting sojourn.

Complex Commands

Complex commands are just the shell's internal commands, used to group simple commands into controlled sets based on your requirements. These include the loop constructs and conditional test structures, and they cannot stand alone: an if requires a then and a fi at the very least. Let's take a look at the man pages again at this point.

The for structure: My system's man page gives

for name [ in word ... ] do list done

as the syntax description of the for construct. It is correct, but it does not really show the layout of the command at all. Look at the example below and you can see straight away what is supposed to happen.

Example for syntax

alphabet="a b c d e"                 # Initialise a string
count=0                              # Initialise a counter
for letter in $alphabet              # Set up a loop control
do                                   # Begin the loop
  count=`expr $count + 1`            # Increment the counter
  echo "Letter $count is [$letter]"  # Display the result
done                                 # End of loop

So in plain English: for each letter found in alphabet, loop between do and done and process the list of commands found. Let's take this one line at a time from the top. The first line is the way sh likes its variables set: there is no leading word as in the csh (set), just the variable name, and there are no blanks on either side of the equals sign - indeed, if you put a blank in, the shell will give you an error message for your trouble. This also explains the difference between the top two lines of the example: because I want to include spaces in the string for alphabet, I must enclose the whole string in double quotes; on the next line this is not required, as there are no embedded blanks in the value of count. When setting variables, no blanks are allowed. Everywhere else, sh loves blanks.
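The for-loop example above, written so it runs as-is (the `$( )` form used here is equivalent to the back-quotes around expr):

```shell
alphabet="a b c d e"                 # Initialise a string
count=0                              # Initialise a counter
for letter in $alphabet              # Loop over each letter
do
  count=$(expr $count + 1)           # Increment the counter
  echo "Letter $count is [$letter]"  # e.g. "Letter 1 is [a]"
done
```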
In line 3 the for statement creates a loop, selecting the next letter from alphabet each time through and executing the list found between do and done for each letter. This process also strips away any blanks before and after each letter found in alphabet. The do and done statements are not executed as such; they simply mark the beginning and end of the loop list. They are, however, a matched pair - leave one out and the shell will complain. Inside the loop are two simple commands (apparently!). The first one just increments the loop counter by adding one to its current value; note the use of the back-quotes here to force execution of the expr command before setting the new value of count (there will be more about this later). The next line is something we have seen before, just a display command showing the values of the variables; note the use of the $ symbol to request the value of a variable.

The while structure: There is another similarly structured command in sh called while. Its syntax is listed as

while list do list done

which you should now be able to translate yourself into something like the example below.

Example while syntax

alphabet="a b c d e"                          # Initialise a string
count=0                                       # Initialise a counter
while [ $count -lt 5 ]                        # Set up a loop control
do                                            # Begin the loop
  count=`expr $count + 1`                     # Increment the counter
  position=`echo "$count + $count - 1" | bc`  # Position of next letter
  letter=`echo "$alphabet" | cut -c$position-$position`  # Get next letter
  echo "Letter $count is [$letter]"           # Display the result
done                                          # End of loop

Most of this is the same construct; I have just replaced the for loop set-up with its equivalent while syntax. Instead of stepping through the letters in alphabet, the loop control now monitors the size of the count with [ $count -lt 5 ]. The -lt flag here represents less-than and is part of the UNIX test command, which is implied by the square brackets.
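A runnable sketch of the while variant; here expr computes the letter position (the original text pipes the arithmetic through bc, which reads its expression from stdin - expr gives the same result for integers):

```shell
alphabet="a b c d e"
count=0
while [ $count -lt 5 ]                    # Loop while count < 5
do
  count=$(expr $count + 1)
  position=$(expr $count + $count - 1)    # Letter position in the string (1, 3, 5, ...)
  letter=$(echo "$alphabet" | cut -c$position-$position)
  echo "Letter $count is [$letter]"
done
```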
Any other command, list or variable could be put here, as long as its substituted value equates to an integer: a zero value will exit the loop, anything else and the loop will continue to process. From the above you can work out that test returns 1 for true and 0 for false. Have a look at the man pages for test at this point; you will find it a very useful command with great flexibility.

The if structure: Next in complexity is

if list then list [ elif list then list ] ... [ else list ] fi

or the if construct. What does that lot mean? Well, if statements in any language are usually associated with predication, and so, as you would expect, there is some more implied use of the UNIX test command. Let's generate an example to see the structure in a more usual form. The square brackets in the echo statements have no relevance other than to clarify the output when executed (see Debugging); however, the square brackets in the if and elif lines are mandatory to the structure.

Example simple if syntax

if [ -f $dirname/$filename ]
then
  echo "This filename [$filename] exists"
elif [ -d $dirname ]
then
  echo "This dirname [$dirname] exists"
else
  echo "Neither [$dirname] or [$filename] exist"
fi

You can see here more examples of what test can do. The -f flag tests for the existence of a plain file, while -d tests for the existence of a directory. There is no limit (that I can discover) to the number of elif's you can use in one if statement. You can also stack up the tests into a list using a double pipe or double ampersand, as in the complex if syntax example below. The double pipe (||) is the syntax for a logical or, whereas the double ampersand (&&) is the logical and.
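To try the -f and -d tests against real paths, a self-contained run (the directory and file names here are made up for the demo):

```shell
dirname=/tmp/ifdemo
filename=hello.txt
mkdir -p "$dirname"            # ensure the directory exists
touch "$dirname/$filename"     # ensure the plain file exists
if [ -f "$dirname/$filename" ]
then
  echo "This filename [$filename] exists"
elif [ -d "$dirname" ]
then
  echo "This dirname [$dirname] exists"
else
  echo "Neither [$dirname] or [$filename] exist"
fi
```

Because the file exists, the first branch fires; delete the file and the elif branch fires instead.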
Example complex if syntax

if [ -f $dir/$file ] || [ -f $dir/$newfile ]
then
  echo "Either this filename [$file] exists"
  echo "Or this filename [$newfile] exists"
elif [ -d $dir ]
then
  echo "This dirname [$dir] exists"
else
  echo "Neither [$dir] or [$file or $newfile] exist"
fi

In the sh if construct it is important to put the then word on its own line, or sh will complain about an invalid test. Also important is the blank inside each end of the test; without it, the test will generate a syntax error - usually "test expected!", which is a bit meaningless.

The case structure: Next is

case word in [ pattern [ | pattern ] ... ) list ;; ] esac

which is probably the most complicated construct to decode from the simple syntax listed above. It is a bit like a multi-line if statement linked with logical or symbols (||). It is commonly used to process a list of parameters passed into a script as arguments, when the actual parameters could be in any order or of any value. The layout is shown in the example below, which is a section from a print script.

Example case syntax

size=0                           # Default Char Point Size (!)
page=660                         # Default Page Point Size
while [ "$1" != "" ]             # When there are arguments...
do                               # Process the next one
  case $1 in                     # Look at $1
    -l) lines=47;                # If it's a "-l", set lines
        page=470;                # Set the Landscape Page Point
        options="$options -L -l";  # Set the Landscape Options
        shift;;                  # Shift one argument along
    -p) lines=66;                # If it's a "-p", set lines
        options="$options -l";   # Set the Portrait Options
        shift;;                  # Shift one argument along
    -s) size=$2;                 # If it's a "-s", set size
        shift 2;;                # Shift two arguments along
    *)  echo "Option [$1] not one of [p, l, s]";  # Error (!)
        exit;;                   # Abort Script Now
  esac
  if [ $size = 0 ]               # If size still un-set...
  then
    size=`echo "$page / $lines" | bc`   # Set from pages over lines
  else                                  # or
    lines=`echo "$page / $size" | bc`   # Set lines
  fi
done
options="$options$lines -s$size"   # Build complete option list
lp -P$PRINTER $options $filename   # Output print file to printer

Here we see a while loop, exiting when no more parameters are found on the input line, enclosing a case statement. The case statement repeatedly tests $1 against a list of possible matches indicated by the right parentheses. The star (*) at the end is the default case and will match anything left over. When a match is found, the list of commands following the right parenthesis is executed up to the double semi-colon. In each of these lists there is a shift statement, which shifts the input parameters one place left (so $2 becomes $1, etc.), allowing the next parameter to be tested on the next pass through the loop. In the case of the "-s" parameter, an extra following argument is expected (the size value), which is why the shift instruction contains the additional argument 2, shifting the parameters two places left. This effectively allows the processing of all the passed arguments in any order, and includes an exit for an invalid parameter condition via the star match. The if statement at the end checks whether the size parameter has been set, then uses the bc command to set either size or lines accordingly. When complete, the final options are created and passed to the lp command to print the file.

The parent and sub-shell structure: Then there are two easy ones, the ( list ) and { list; } constructs, which simply execute the whole list of commands either in a separate sub-shell ( ) or in the parent shell { }, with a note that the blanks between the { } are mandatory.

The function structure: Lastly in the complex command section we come to what is probably the most underused but most useful construct for serious scripters: the function definition.
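A trimmed-down, runnable version of the same case/shift pattern; the option set is kept from the print script, the argument values are made up, and set -- stands in for the script's real command line:

```shell
lines=66; size=0
set -- -l -s 12              # Pretend the script was called with: -l -s 12
while [ "$1" != "" ]         # While arguments remain...
do
  case $1 in
    -l) lines=47; shift;;    # Landscape
    -p) lines=66; shift;;    # Portrait
    -s) size=$2; shift 2;;   # -s consumes a following value, hence shift 2
    *)  echo "Option [$1] not one of [p, l, s]"; break;;
  esac
done
echo "lines=$lines size=$size"   # lines=47 size=12
```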
The syntax is deceptively simple, which I guess is what leads most users to assume it's not worth learning about. How wrong they are. Just take a look at the example below to see what I mean.

Example function syntax

i_upper_case() {
  echo $1 | tr 'abcdefghijklmnopqrstuvwxyz' \
               'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
}

This is a very simple function called i_upper_case; you can probably guess what it does. The backslash at the end of the echo line is a UNIX feature that allows a command line to be continued on the next line: it tells the system to ignore the next character, in this case the newline. Note that the function gets its input argument from a passed parameter ($1). To make use of this function within a script, you simply call it with an argument as follows:

i_upper_case "fred"

or

name="fred"
i_upper_case $name

And you will get back FRED in either case. A more appropriate usage would be something like:

small_name="$input_argument"
large_name=`i_upper_case "$small_name"`
echo "Large Name = [$large_name]"

which allows the case to be changed and put into a new variable. The advantage of doing this at all is that you don't have to re-code the same thing over again when you want to use the feature several times within the script. Note the use here of the double quotes around the variable to the right of the equals sign: this preserves any blanks within the string, which would otherwise be treated as argument separators, and hence the function would only process the first argument in the list. What this means is:

small_name="fred smith"
large_name=`i_upper_case "$small_name"`   # Quoted parameter
echo "Large Name = [$large_name]"

will display FRED SMITH, whereas:

small_name="fred smith"
large_name=`i_upper_case $small_name`     # Unquoted parameter
echo "Large Name = [$large_name]"

will display FRED only. This bug can be traced back to the function definition, which only reads the $1 parameter. Changing it to read the $@ parameter would correct the bug for this function.
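The function, runnable end to end, using the $@ fix suggested in the text so multi-word arguments survive ($( ) stands in for the back-quotes):

```shell
i_upper_case() {
  # "$@" keeps all passed words, not just the first
  echo "$@" | tr 'abcdefghijklmnopqrstuvwxyz' \
                 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
}

small_name="fred smith"
large_name=$(i_upper_case "$small_name")   # Quoted: both words survive
echo "Large Name = [$large_name]"          # Large Name = [FRED SMITH]
```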
But beware: this type of fix would not be appropriate in all situations. Try to think generically when creating functions and make them as useful as possible in all scenarios. There are two very basic rules to remember when dealing with functions: you cannot use a function until it is defined, thus all function definitions should appear either at the top of the script or in a start-up file such as ~/.profile; and functions can be nested to any depth, as long as the first rule is not violated. At the end of the complex command section there is a reminder that all of the keywords used in these complex commands are reserved words and therefore not available as variable names. This means that you can screw up any UNIX command by using its name as a variable, but you cannot screw up a complex shell reserved word.

    echo() {
      /usr/bin/user/my_echo "$@"
    }

is perfectly okay as a function definition, and sh will happily use your echo function whenever an echo command is required within the script body.

    while() {
      /usr/bin/user/my_while "$@"
    }

is not okay, and the function definition will fail at runtime.

Special Commands: The following is a set of special commands which the shell provides as stand-alone statements. Input and output redirection is permitted for all these commands, unlike the complex commands - you cannot redirect the output from a while loop construct, only from the simple or special commands used within the loop list. The colon ( : ) does nothing! A zero exit code is returned. It can be used to stand in for a command, though I must admit to not finding a real use for it. The dot ( . filename ) reads in commands from another file (see Startup Files & Environment for details). If the filename following the dot is not in the current working directory, then the shell searches along the PATH variable looking for a match. The first match that is found is the file that is used.
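The command-shadowing rule above can be checked with a quick runnable sketch. Here /bin/echo stands in for the hypothetical /usr/bin/user/my_echo, and the wrapper prefix is invented purely so the effect is visible.

```shell
# A function may shadow an ordinary command name such as echo;
# a reserved word such as while cannot be redefined this way.
echo() { /bin/echo "my_echo: $@"; }

wrapped=$(echo Hello)      # the function is found before the command
unset -f echo              # remove the shadowing function again
plain=$(echo Hello)        # back to the normal echo

/bin/echo "$wrapped"       # prints: my_echo: Hello
/bin/echo "$plain"         # prints: Hello
```

Calling the real binary by its full path (/bin/echo) inside the function is what prevents infinite recursion.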
The file is read into the shell and the commands found are executed within the current environment. The break ( break [ n ] ) command causes an exit from inside a for or while loop. The optional n indicates the number of levels to break out from - the default is one level. Although not stated in the syntax rules, I have used this statement in an if then else fi construct to good effect in Simple Utility Functions, where it causes an exit from the function but does not cause an exit from the calling script. The continue ( continue [ n ] ) command resumes the next iteration of the enclosing for or while loop at the [ optional nth ] enclosing loop. Can't say I've used this one either. The cd ( cd [ argument ] ) command is the change directory command for the shell. The directory is specified with argument, which defaults to HOME. The environment variable CDPATH is used as a search path for directories specified by argument. The echo ( echo [ argument ] ) command is the shell output statement. See the man pages for echo(1) for full details. The eval ( eval [ argument ] ) command reads the arguments into the shell and then attempts to execute the resulting command. This allows pre-emptive parameter substitution of hidden parameters or commands. The exec ( exec [ argument ] ) command reads in the command specified by the arguments and executes it in place of this shell without creating a new process. Input and output arguments may appear and, if no others are given, will cause the shell input and/or output to be modified. The exit ( exit [ n ] ) command causes a shell to exit with the exit status specified by the n parameter. If the n parameter is omitted, the exit status is that of the last executed command within the shell. The export ( export [ variable ] ) command we have already met; it is the command which makes shell variables global in scope. Without a variable, export will list the currently exported variables.
The getopts command is provided to support command syntax standards - see the getopts(1) and intro(1) man pages for details. The hash ( hash [ -r ] [ name ] ) command remembers the location in the search path (PATH variable) of the command name. The option -r causes the shell to forget the location of name. With no options the command will list out details about currently remembered commands. This has the effect of speeding up access to some commands. The newgrp ( newgrp [ argument ] ) command is equivalent to exec newgrp argument. See newgrp(1M) for usage and description. The newgrp command logs a user into a new group by changing the user's real and effective group ID. The user remains logged in and the current directory is unchanged. The execution of newgrp always replaces the current shell with a new shell, even if the command terminates with an error (unknown group). The pwd ( pwd ) command literally prints the current working directory. Usually used to set the CWD variable internally. The read ( read name ) command will be seen in several examples. It allows the shell to pause and request user input for the variable name, which is then accepted as the variable's value. The readonly ( readonly [ name ] ) command sets a variable as immutable. Once named in this command, variables cannot be reassigned new values. The return ( return [ n ] ) command causes a function to exit with the return value n. If n is omitted, the return value is the exit status of the last command executed within the function. Unlike exit, this does not result in termination of the calling script. The shift ( shift [ n ] ) command causes the positional parameters to be moved to the left ($2 becomes $1, etc.) by the value of n, which defaults to one. The test command is used to evaluate conditional expressions. See the man pages for test(1) for full details and usages. The times command prints the accumulated user and system times for processes run from the shell.
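Two of the special commands above, shift and return, are easiest to see inside a function. This is a small illustrative demo with an invented name, not one of the book's utilities.

```shell
# shift moves the positional parameters left; return sets the function's
# exit status without terminating the calling script.
sum_first_two() {
  total=$(( $1 + $2 ))
  shift 2                    # now the old $3 has become $1, and so on
  echo "sum=$total rest=$*"
  return 0                   # function exit status only, script continues
}

sum_first_two 3 4 x y        # prints: sum=7 rest=x y
```

Note that shift inside a function operates on the function's own parameters, not the calling script's.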
The trap ( trap [ argument ] [ n ] ) command allows conditional execution of the commands contained within argument, dependent on the shell receiving the numeric or symbolic signal(s) n. The type ( type [ name ] ) command indicates how name would be interpreted if used as a command name. The ulimit and umask commands exist in their own right as UNIX commands. See the man pages. The unset ( unset [ name ] ) command allows names to be unset. This removes the values from the variable or function. The names PATH, PS1, PS2, MAILCHECK, and IFS cannot be unset. The wait ( wait [ n ] ) command waits for the background process n to terminate and reports its termination status, where n is the process id. With no arguments, all current background processes are waited for. Most of these special commands get used somewhere in this book, and more detailed explanations will follow at that time. Comment structure: The next thing on my system's man page is a reference to the hash (#) comment character. It states that any word beginning with # causes that word and all the following characters up to a newline to be ignored. There are no notes about the first-line exception that I gave in The Basic Shells when we were dealing with shell indicators (the #! sequence). This page was brought to you by rhreepe@injunea.demon.co.uk
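The trap command described above is most often used for cleanup. A minimal runnable sketch, using a subshell so the EXIT trap fires immediately; the file name comes from mktemp and is otherwise arbitrary.

```shell
# The subshell registers a trap on EXIT, so its temporary file is
# removed however the subshell ends (normal exit or signal).
tmp=$(mktemp)
(
  trap 'rm -f "$tmp"' EXIT   # symbolic signal name; 0 also works
  echo "scratch data" > "$tmp"
)                            # subshell exits here and the trap fires
[ -f "$tmp" ] || echo "temporary file cleaned up"
```

The same trap line at the top of a real script guarantees the scratch file disappears even if the script is interrupted.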

GUAVA

MYSQL

/etc/init/mysql.conf: exec /usr/sbin/mysqld --skip-grant-tables

Access denied for user 'root'@'localhost' (using password: NO)

Whose fault? No matter - probably because my root user doesn't have a password. What to do? Finally I did the following:

  1. Stopped the mysql server: I simply found the mysqld process in the Windows Task Manager and stopped it.
  2. Created an init.txt file with the following content: UPDATE mysql.user SET Password=PASSWORD('mypassword') WHERE User='root'; FLUSH PRIVILEGES; grant all privileges on *.* to root@localhost identified by 'mypassword' with grant option; grant all privileges on mydatabase.* to root@localhost identified by 'mypassword' with grant option;
  3. Ran the mysql server from the command line: mysqld --init-file=F:\mysql\bin\init.txt (then checked with show processlist)

IRC

http://webchat.freenode.net/

(/connect freenode) /server irc.freenode.net /join #james

IRC Information…..

IRC Class - Basic IRC Commands

IRC - Internet Relay Chat

Just as a few tricks make surfing the net easier, the same is true of IRC. Below you will find some of the more common IRC commands that we use often. For a far more complete list, please visit our mIRC Commands page.

/join Type /join #channelname – to join a channel of your choice Example: /join #bossmom What it looks like:

[18:44] *** Now talking in #beginner 
--Op-- bossmom has joined the channel 
[18:44] *** Topic is 'Beginner's Help/Chat Channel....All Are Welcome Here!! ®© [ENGLISH]' 
[18:44] *** Set by X on Sun Jul 23 16:10:34

/me The /me is an action message. Type /me 'does anything' Example: /me waves hello What it looks like: * bossmom waves hello

/msg Type /msg nickname (message) to start a private chat. Example: /msg puddytat Hey tat, how are you? What it looks like: -> puddytat Hey tat, how are you?

/nick /nick changes your nickname Example: type /nick newnickname (limit 9 characters) What it looks like: I typed /nick luv2quilt *** bossmom is now known as luv2quilt

/notice A notice is used to send a short message to another person without opening up a private window. Type /notice nickname (message) Example: /notice badnick Please change your nickname for this family channel. What it looks like: -> -badnick- Please change your nickname for this family channel. /part Type /part – to leave one channel Type /partall – to leave all the channels you are in

/ping Type /ping nickname. What this command does is give you the ping time, or lag time, between you and the person you pinged. Lag can be explained as the amount of time it takes for you to type your message and for others to read your messages. Unfortunately, lag is always a part of IRC, although most times it’s not a problem, just a nuisance. Example: /ping luv2quilt What it looks like: [19:04] -> [luv2quilt] PING [19:04] [luv2quilt PING reply]: 0secs

/query Similar to the /msg, except it forces a window to pop open. Type /query nickname (message) Example: /query Sofaspud^ Sooo….what’s new? What it looks like: soooo....what's new?

/quit Type /quit to leave IRC altogether. This disconnects mIRC from the server. Example: /quit Going out for dinner…nite all What it looks like: *** Quits: saca (Leaving)

/ignore Unfortunately, there will be times when you don't want to talk to someone, or else someone may be harassing you. By typing /ignore nickname 3, you will not receive any more messages from that person. Example: /ignore luv2quilt 3 To unignore them, type /ignore -r luv2quilt 3 What it looks like: *** Added *!*bossmom@*.dialup.netins.net to ignore list *** Removed *!*bossmom@*.dialup.netins.net from ignore list

/whois Type /whois nickname to see a bit more information about another user. You’ll see what server another person is using, or what their ISP is. Pretty helpful when you don’t recognize a nickname that wants to chat. You may recognize the IP, (Internet Protocol) and then feel more comfortable carrying on a conversation. You’ll also be able to see what other channels a person is in, which might be a good indicator if you really want to talk with them or not. Example: /whois bossmom What it looks like: luv2quilt is bossmom@elwo-01-094.dialup.netins.net * Enjoy the Journey…….. luv2quilt on @#bossmom luv2quilt using Seattle.WA.US.Undernet.org the time for school is during a recession. luv2quilt has been idle 18secs, signed on Sun Jul 23 18:47:26 luv2quilt End of /WHOIS list.

/chat This opens up a DCC/CHAT window to another user. What’s nice about these is that you can continue to chat even if you get disconnected from your server. Word of Caution: Do NOT accept dcc/chats nor dcc/gets from anyone that you don’t know. Type /chat nickname. Example: /chat oddjob^ What it looks like: Chat with oddjob^ Waiting for acknowledgement…

/help There's one more very helpful command, and probably the one you'll use a lot when first starting out. In fact, I still use it quite a lot, and that's the built-in help menu of mIRC. Type /help and you'll see the mIRC Help Menu open up. You can do a search from there, or you can type /help topic. Either way, a TON of information at your fingertips. Example: /help Basic IRC Commands

You are doing great so far. If you haven't yet read some Basic IRC Tips, I'd encourage you to take a peek; otherwise we are ready to set up your IRC client. Please choose one of the following clients you would like to learn:

mIRC Setup Tutorial
PIRCH Setup Tutorial 

Let’s move on with the next step – getting online with IRC :)

MAC OSX

Network

sudo scutil --set HostName eric

Spotlight

sudo mdutil -a -i off
sudo su
chmod 0000 /Library/Spotlight
chmod 0000 /System/Library/Spotlight
chmod 0000 /System/Library/CoreServices/Search.bundle
chmod 0000 /System/Library/PreferencePanes/Spotlight.prefPane
chmod 0000 /System/Library/Services/Spotlight.service
chmod 0000 "/System/Library/Contextual Menu Items/SpotlightCM.plugin"
chmod 0000 /System/Library/StartupItems/Metadata
chmod 0000 /usr/bin/mdimport
chmod 0000 /usr/bin/mdcheckschema
chmod 0000 /usr/bin/mdfind
chmod 0000 /usr/bin/mdls
chmod 0000 /usr/bin/mdutil
chmod 0000 /usr/bin/md

After a reboot, open a new Terminal and do sudo su to get a root shell, then:

rm -r /.Spotlight-V100
rm -r /private/var/tmp/mds
exit
sudo mdutil -E /

/System/Library/Frameworks/ScreenSaver.framework/Versions/A/Resources/ScreenSaverEngine.app

SCREEN RECORD
…

Screen Capture #screenshot #printscreen

Switch to the screen that you want to capture. Hold down the Apple key ⌘ + Shift + 3 and release. You will see a picture file on your desktop - that's the screen capture. You can also do a screen capture for a portion of your screen: switch to the screen you want to capture, hold down the Apple key ⌘ + Shift + 4, release all keys, then drag with your mouse to select the area.

ANDROID

mkdir android ; cd android ; repo init -u git://android.git.kernel.org/platform/manifest.git ; repo sync ; make

CHROMIUM

javascript:(function(){ window.location.href='url1'; window.open('url2');})();

GRAPHITE

echo "gaugor:333|g" | nc -u graphite.qutics.com 8125 https://github.com/etsy/statsd

sudo apt-get install apache2

sudo mkdir /vol
sudo mount /dev/xvdf /vol
sudo cp /vol/000-default /etc/apache2/sites-enabled/
cat /vol/hosts
sudo vi /etc/hosts
---
10.47.144.106 echarles.net www.echarles.net blog.echarles.net edmond.echarles.net eleonore.echarles.net
10.47.144.106 ibayart.com www.ibayart.com blog.ibayart.com
10.47.144.106 u-mangate.com www.u-mangate.com blog.u-mangate.com
10.47.144.106 datalayer.io www.datalayer.io blog.datalayer.io
10.47.144.106 datashield.io www.datashield.io blog.datashield.io
10.47.144.106 datalayer.io www.datalayer.io blog.datalayer.io
10.47.144.106 datalayer.be www.datalayer.be blog.datalayer.be
10.47.144.106 place.io www.place.io blog.place.io
10.47.144.106 tipi.io www.tipi.io blog.tipi.io
10.47.144.106 placestory.com www.placestory.com blog.placestory.com
10.47.144.106 socialitude.com www.socialitude.com blog.socialitude.com

10.47.144.106 www.cib-bic.be www.cib-sa.be
10.47.144.106 www.credit-regional-wallon.be
---
vi /root/.bashrc
source /vol/.bash_profile
---
cd /vol
ls
lost+found
df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       8256952 1298868   6874200  16% /
tmpfs            3826296       0   3826296   0% /dev/shm
/dev/xvdf       25803068  176196  24316152   1% /vol

If you wish this device to mount automatically when you reboot the server make sure you add this to your /etc/fstab file. /dev/xvdf /vol/ ext3 noatime,nodiratime 0 0

more /etc/fstab

LABEL=cloudimg-rootfs / ext4 defaults 0 0 /dev/xvdf /vol auto noatime 0 0

cd /
rm -fr /opt # if needed…
sudo ln -s /vol opt
sudo ln -s /opt/env a
cd /var
sudo ln -s /opt/var-data data
cd
ln -s /opt/env/dot-aos .aos
ln -s /opt/env/bash_profile .bash_profile
placestory-store-reset-restart.sh
jps

ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh localhost

S3

http://www.slideshare.net/echarles/savedfiles?s_title=storm-distributed-and-faulttolerant-realtime-computation&user_login=nathanmarz http://s3.amazonaws.com/ppt-download/storm-strange-loop-110920101342-phpapp01.pdf?response-content-disposition=attachment&Signature=1jx8dEs5XsAUwVzFuxAbcR8Uqq8%3D&Expires=1354693255&AWSAccessKeyId=AKIAIW74DRRRQSO4NIKA



#ascii

Character              Hex  Decimal
NewLine                -    -
WhiteSpace             -    -
KanjiSpace (WideSpace) -    -
NULL                   00   0
StartOfHeading         01   1
StartOfText            02   2
EndOfText              03   3
EndOfTrans.            04   4
ENQuiry                05   5
ACKnowledge            06   6
BELL                   07   7
BackSpace              08   8
HorizTab               09   9
LineFeed               0A   10
VerticalTab            0B   11
FormFeed               0C   12
CarriageReturn         0D   13
ShiftOut               0E   14
ShiftIn                0F   15
DataLinkEscape         10   16
DeviceControl1         11   17
DeviceControl2         12   18
DeviceControl3         13   19
DeviceControl4         14   20
NegativeAcK            15   21
SYNchron.Idle          16   22
EndTransBlock          17   23
CANcel                 18   24
EndOfMedium            19   25
SUBstitute             1A   26
ESCape                 1B   27
FileSeparator          1C   28
GroupSeparator         1D   29
RecordSep.             1E   30
UnitSeparator          1F   31
SPace                  20   32
!                      21   33
"                      22   34
#                      23   35
$                      24   36
%                      25   37
&                      26   38
'                      27   39
(                      28   40
)                      29   41

container

brick

inject

ui

web

bridge

remote

io

mme

parser a.k.a ast

coordination

processing

metadata

data

security

mgt

msc

client-desktop

client-mobile

lng

os

devops [sp]

devops-software-management

devops-software-repository

devops-software-integration [ci]

devops-software-monitoring [sm]

devops-software-supervision

devops-virtual-cloud

devops-virtual-cloud-private

curl -XGET 'http://localhost:9200/aos-microfacet-post/_mapping?pretty'

Android

emulator -avd aos
adb logcat
build-deploy.sh

ANDROID SDK

android

android avd

emulator -avd aos
adb logcat
adb -s emulator-5554 logcat
adb -s 47901edd098330f4 logcat

android list targets
id: 25 or "android-18"
Name: Android 4.3
Type: Platform
API level: 18
Revision: 1
Skins: WQVGA400, WQVGA432, WVGA800 (default), WXGA800-7in, WXGA800, HVGA, WVGA854, WSVGA, WXGA720, QVGA
ABIs : armeabi-v7a

android create avd -n <name> -t <targetID> [-<option> <value>]

launch ‘android avd’, select and edit the ‘aos’ virtual device, and add the hardware properties

android list avd
Name: aos
Path: /home/eric/.android/avd/aos.avd
Target: Android 4.3 (API level 18)
ABI: armeabi-v7a
Skin: WVGA800
Sdcard: 64M

emulator -avd aos

adb logcat

screen capture

adb devices -l

List of devices attached
47901edd098330f4 device usb:3-2
emulator-5554 device product:sdk model:sdk device:generic

adb [-d | -e | -s <serialNumber>] <command>

#ant debug

adb uninstall [-k] aos.mobile… ('-k' means keep the data and cache directories)
adb uninstall aos.mobile…
adb install target/aos-…-0.0.1-SNAPSHOT.apk
adb install -r target/aos-…-0.0.1-SNAPSHOT.apk
adb shell am start -n aos…./.MyActivity

Running Maven from Eclipse: set up the Java build path.


change screen orientation (portrait<->landscape)

adb shell setprop debug.checkjni 1

Installation error: INSTALL_FAILED_VERSION_DOWNGRADE Can’t install update of com.example.android.apis update version 0 is older than installed version 18

Network Address              Description
10.0.2.1                     Router/gateway address
10.0.2.2                     Special alias to your host loopback interface (i.e., 127.0.0.1 on your development machine)
10.0.2.3                     First DNS server
10.0.2.4 / 10.0.2.5 / 10.0.2.6  Optional second, third and fourth DNS server (if any)
10.0.2.15                    The emulated device's own network/ethernet interface
127.0.0.1                    The emulated device's own loopback interface

sign in release mode
$ keytool -genkey -v -keystore app.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000
$ jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore app.apk alias_name
$ jarsigner -verify app.apk
$ jarsigner -verify -verbose -certs app.apk
$ zipalign -v 4 your_project_name-unaligned.apk your_project_name.apk

Cordova

cordova create hello com.example.hello HelloWorld

cd hello cordova platform add ios cordova platform add android cordova platform add blackberry10 cordova platforms ls cordova platform rm blackberry10 cordova platform rm android

cordova build

cordova prepare ios cordova compile ios

cordova emulate android

cordova run android

Apache Cordova Hello World Application

Run Application

Run Tests

The Java files in this directory are compiled into the JAR file

java-firefox-extension/tools/firefoxClassLoader.jar

If you need to modify them, you can simply recompile and repackage. (If you are using Eclipse, simply add java-firefox-extension/tools/class-loader as a source folder in

T4F Essentials JS NodeJs

NodeJs usage examples.

T4F CLOJURE

If not done by m2eclipse, add natures and buildcommands to eclipse .project file

ccw.leiningen.nature ccw.nature ccw.builder ccw.leiningen.builder

clj
lein clean
lein compile
lein install # !!! can override pom.xml, so rewrite with pom.xml_bu
lein run

(load "io.datalayer/clj/hello")
(ns io.datalayer.clj.hello)
(hello "Clojure")

Create and run a simple Clojure project ("Hello Betty"):

Open the Java perspective: Window > Open Perspective > Java (a perspective is a predefined layout of views, suitable for a particular type of development).
Create a Clojure project with the Leiningen Project Wizard: File > New > Leiningen Project, name it myproject. The project is created using the "default" Leiningen template, which creates a Clojure project with a predefined "myproject.core" namespace in src/myproject/core.clj.
Add a function definition to myproject.core: open src/main/clojure/t4fclojure.clj, add the following at the end: (defn hello [who] (str "Hello " who " !")), and save the file.
Run the project: with src/main/clojure/t4fclojure.clj open, hit Ctrl+Alt+S (Cmd+Alt+S on MacOS). This sends the whole file's code to the REPL (and also takes care of starting a REPL for the project if none is currently started).
Switch to the REPL in the namespace of your file: hit Ctrl+Alt+N (Cmd+Alt+N on MacOS). Alternatively, just click at the bottom of the REPL inside the "text input area".
Call your function (hit Enter to send the expression if the cursor is at the end, or Ctrl+Enter / Cmd+Enter if the cursor is not at the end of the text):
> (hello "Clojure") [Ctrl+Enter]
> "Hello Clojure !"

#docker.io

COMMANDS | docker [OPTIONS] COMMAND [arg...]
-H=[unix:///var/run/docker.sock]: tcp://host:port to bind/connect to or unix://path/to/socket to use
A self-sufficient runtime for linux containers.
Commands:
  attach    Attach to a running container
  build     Build an image from a Dockerfile
  commit    Create a new image from a container's changes
  cp        Copy files/folders from the containers filesystem to the host path
  diff      Inspect changes on a container's filesystem
  events    Get real time events from the server
  export    Stream the contents of a container as a tar archive
  history   Show the history of an image
  images    List images
  import    Create a new filesystem image from the contents of a tarball
  info      Display system-wide information
  inspect   Return low-level information on a container
  kill      Kill a running container
  load      Load an image from a tar archive
  login     Register or Login to the docker registry server
  logs      Fetch the logs of a container
  port      Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
  pause     Pause all processes within a container
  ps        List containers
  pull      Pull an image or a repository from the docker registry server
  push      Push an image or a repository to the docker registry server
  restart   Restart a running container
  rm        Remove one or more containers
  rmi       Remove one or more images
  run       Run a command in a new container
  save      Save an image to a tar archive
  search    Search for an image in the docker index
  start     Start a stopped container
  stop      Stop a running container
  tag       Tag an image into a repository
  top       Lookup the running processes of a container
  unpause   Unpause a paused container
  version   Show the docker version information
  wait      Block until a container stops, then print its exit code

INSTALL |
apt-get install docker.io
docker -d &

USAGE |
docker info
docker version
docker images
docker search ubuntu
docker pull ubuntu
docker pull ubuntu:utopic
docker pull sequenceiq/hadoop-docker
docker pull sequenceiq/spark
docker ps
docker logs
docker run
docker run -d
docker run -it aosio/ubuntu:utopic /bin/bash
docker run -it aosio/sinatra /bin/bash
docker run -p 50070:50070 -i -t sequenceiq/hadoop-docker /etc/bootstrap.sh -bash
docker run -it -h sandbox sequenceiq/spark
/etc/bootstrap.sh -bash
docker inspect
docker build -t aosio/memcached .

To detach the tty without exiting the shell, use the escape sequence CTRL+p+q.

docker run -ti ubuntu:14.04 /bin/bash -c 'ls'
docker run -ti ubuntu:14.04 /bin/bash -c 'useradd -u 12345 -s /bin/bash eric; su - eric'
---
docker build -t aosio/ssh:utopic .
docker run -p 222:22 -i -t aosio/ssh:utopic /bin/bash
ssh root@localhost -p 222
docker run -d -P --name ssh aosio/ssh:utopic
docker port ssh 22
ssh root@localhost -p
docker stop ssh
docker rm ssh
---
docker build -t sequenceiq/hadoop-docker:2.5.0 .
docker commit 8dbd9e392a96 my_img
docker tag 5db5f8471261 sinatra
docker inspect --format="" 934df0238dd3
docker login

IMAGES |
ubuntu-image + https://github.com/tianon/docker-brew-ubuntu-core.git
hadoop-image
docker run -d -P --name="Hadoop" -h "hadoop" ruo91/hadoop:2.4.1
ssh `docker inspect -f '' Hadoop`
start-all.sh
jps
for((i=0; i<10; i++)) do echo ${i}; done > test.log
hdfs dfs -copyFromLocal test.log /
hdfs dfs -ls /
exit
docker port Hadoop 50070
------------
ambari-image + docker run -d -p 8080 -h amb0.mycorp.kom --name ambari-singlenode sequenceiq/ambari --tag ambari-server=true

API
java-api + https://github.com/jboss-fuse/fuse-docker

ORCHESTRATION |
+ flynn https://flynn.io
+ deis http://deis.io
+ coreos http://coreos.com
+ Mesos http://mesosphere.io/2013/09/26/docker-on-mesos
+ maestro https://github.com/toscanini/maestro
+ Docker Openstack https://wiki.openstack.org/wiki/Docker
+ Paas zone within OpenStack http://www.sebastien-han.fr/blog/2013/10/31/build-a-paas-zone-within-your-openstack-cloud
+ shipyard http://shipyard-project.com
+ http://www.infoq.com/news/2013/12/futureops
+ http://www.slideshare.net/profyclub_ru/8-mitchell-hashimoto-hashicorp
+ Decentralizing Docker: How to use serf with Docker http://blog.ctl-c.io/?p=43
+ http://mesosphere.io/learn/run-docker-on-mesosphere
+ https://github.com/mesosphere/deimos
+ https://github.com/mesosphere/marathon
+ http://www.tsuru.io
+
https://github.com/tsuru/docker-cluster
+ http://docs.tsuru.io/en/latest/provisioners/docker/schedulers.html
+ http://blog.tsuru.io/2014/04/04/running-tsuru-in-production-scaling-and-segregating-docker-containers
+ maestro-ng https://github.com/signalfuse/maestro-ng
+ decking http://decking.io
+ kubernetes https://github.com/GoogleCloudPlatform/kubernetes
+ projectatomic http://www.projectatomic.io
+ geard http://openshift.github.io/geard

# Serf on Docker

This is a [docker](docker.io) image and a couple of helper bash functions to work with [serf](serfdom.io). This document describes the process:

- create the docker image
- start a cluster of connected serf agents running in docker containers
- stop/start nodes to check how membership gossip works

## Create the image

```
git clone git@github.com:sequenceiq/docker-serf.git
cd docker-serf
git checkout serf-only
docker build -t sequenceiq/serf .
```

## Start a demo cluster

Run 3 docker containers from the image you just built. All of them run in the background (-d docker parameter):

- **serf0** the first one doesn't join a cluster, as it is the first
- **serf<1..n>** nodes connecting to the cluster

The serf-start-cluster function defaults to starting 3 nodes. If you want more, just add a parameter: `serf-start-cluster 5`

```
# load helper bash functions serf-xxx
. serf-functions
# start a cluster with 3 nodes
serf-start-cluster
# check the running nodes
docker ps
```

## Start a test node and attach it

This starts a new container named **serf99**, but not in the background like the previous ones.
You will be attached to the container, which:

- joins the cluster
- starts a **/bin/bash** ready to use

```
serf-test-instance
# once attached to the test instance the prompt changes to [bash-4.1#]
serf members
```

You will now see all members, including the test instance itself, **serf99**:

```
serf99.mycorp.kom 172.17.0.5:7946 alive
serf1.mycorp.kom 172.17.0.3:7946 alive
serf0.mycorp.kom 172.17.0.2:7946 alive
serf2.mycorp.kom 172.17.0.4:7946 alive
```

## Start/stop a node

Stop one of the nodes:

```
docker stop -t 0 serf1
```

Now if you run `serf members` again in **serf99** you will notice the serf1 node marked as **failed**. Note: it might take a couple of seconds until the cluster gossips around the failure of the node.

```
serf99.mycorp.kom 172.17.0.5:7946 alive
serf1.mycorp.kom 172.17.0.3:7946 failed
serf0.mycorp.kom 172.17.0.2:7946 alive
serf2.mycorp.kom 172.17.0.4:7946 alive
```

If you restart the node **serf1**:

```
docker start serf1
```

it will appear again as **alive**. Check it on **serf99**:

```
serf members
serf99.mycorp.kom 172.17.0.5:7946 alive
serf1.mycorp.kom 172.17.0.3:7946 alive
serf0.mycorp.kom 172.17.0.2:7946 alive
serf2.mycorp.kom 172.17.0.4:7946 alive
```

docker build -t aosio/h2o .
docker run -d -p 54321:54321 aosio/h2o

puppet chef ansible

#datalayer

BASH DOCUMENTATION |
+ http://tldp.org/
+ http://www.tldp.org/guides.html
+ http://www.tldp.org/LDP/abs/html/index.html
+ http://www.tldp.org/LDP/Bash-Beginners-Guide/html/index.html

useradd user1
passwd user1

ALT^arrow_left ALT^arrow_right: go to the beginning or the end of the line

source file.properties

#!/bin/bash
username=...
mysql -u $username

CTRL^R: go back in history
CTRL^...: go forward in history

screen -list // list all the screens
screen -S aq // create a new screen
screen -r aq // join an existing screen
screen -D -r '1234.somescreensession'

dmesg

Create a bootable Ubuntu USB stick on OS X:

1. Download Ubuntu Desktop.
2. Open the Terminal (in /Applications/Utilities/ or query Terminal in Spotlight).
3. Convert the .iso file to .img using the convert option of hdiutil (e.g., hdiutil convert -format UDRW -o ~/path/to/target.img ~/path/to/ubuntu.iso). Note: OS X tends to put the .dmg ending on the output file automatically.
4. Run diskutil list to get the current list of devices.
5. Insert your flash media.
6. Run diskutil list again and determine the device node assigned to your flash media (e.g. /dev/disk2).
7. Run diskutil unmountDisk /dev/diskN (replace N with the disk number from the last command; in the previous example, N would be 2).
8. Execute sudo dd if=/path/to/downloaded.img of=/dev/rdiskN bs=1m (replace /path/to/downloaded.img with the path where the image file is located; for example, ./ubuntu.img or ./ubuntu.dmg). Using /dev/rdisk instead of /dev/disk may be faster. If you see the error dd: Invalid number '1m', you are using GNU dd; use the same command but replace bs=1m with bs=1M. If you see the error dd: /dev/diskN: Resource busy, make sure the disk is not in use: start Disk Utility.app and unmount (don't eject) the drive.
9. Run diskutil eject /dev/diskN and remove your flash media when the command completes.
10. Restart your Mac and press the alt/option key while the Mac is restarting to choose the USB stick.
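The dd step above (including the GNU-vs-BSD bs=1m/1M gotcha) can be rehearsed safely on scratch files instead of a real /dev/rdiskN device. All file names here are made up for the sketch.

```shell
# Safe re-enactment of the dd copy: a fake 1 MiB "image" is copied to a
# fake "device" file, trying BSD's bs=1m first and falling back to GNU's
# bs=1M, exactly as the notes describe.
img=$(mktemp)
dev=$(mktemp)
head -c 1048576 /dev/zero > "$img"            # fake 1 MiB image

dd if="$img" of="$dev" bs=1m 2>/dev/null ||   # BSD dd accepts 1m...
dd if="$img" of="$dev" bs=1M 2>/dev/null      # ...GNU dd wants 1M

cmp -s "$img" "$dev" && echo "image copied intact"
```

On a real device the of= target is destructive, so double-check the diskN number from diskutil list before running the real command.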
SHELL

rename 's/ACDC/AC-DC/' *.xxx

SSH

Host *
  ServerAliveInterval 120

GIT

git config --global user.name "Eric Charles"
git config --global user.email eric@datalayer.io
---
mkdir openaos
cd openaos
git init
touch README
git add README
git commit -m 'first commit'
git remote add origin git@github.com:echarles/openaos.git
git push origin master
---
Careful: git reset --hard WILL DELETE YOUR WORKING DIRECTORY CHANGES. Assuming you are sitting on that commit, then this command will whack it:

git reset --hard HEAD~1

The HEAD~1 means the commit before head. Or, you could look at the output of git log, find the commit id of the commit you want to back up to, and then do:

git reset --hard <commit-id>
---
git clone git://...
git clone --depth 1 git://...
---
git fetch remote branch: You need to create a local branch that tracks a remote branch. The following command will create a local branch named daves_branch, tracking the remote branch origin/daves_branch. When you push your changes the remote branch will be updated:

git checkout --track origin/daves_branch

or use fetch followed by checkout:

git fetch <remote> <rbranch>:<lbranch>
git checkout <lbranch>

where <rbranch> is the remote branch or source ref and <lbranch> is the as yet non-existent local branch or destination ref you want to track, and which you probably want to name the same as the remote branch or source ref. This is explained under options in the explanation of <refspec>.
---
Fetching a remote: when working with other people's repositories, there are four basic Git commands you will need:

git clone
git fetch
git merge
git pull

These commands all act on a repository's remote URL.
Clone

To grab a complete copy of another user's repository, use git clone, like this:
git clone https://github.com/user/repo.git # Clones a repository to your computer
When you run git clone, the following actions occur:
- A new folder called repo is made
- It is initialized as a Git repository
- All of the repository's files are downloaded there
- git clone checks out the default branch (usually called master)
- git clone creates a remote named origin, pointing to the URL you cloned from
You can choose from several different URLs when cloning a repository. While logged in to GitHub, these URLs are available in the sidebar.

Fetch

Fetching from a repository grabs all the new branches and tags without copying those changes into your repository. You'd use git fetch to look for updates made by other people. If you already have a local repository with a remote URL set up for the desired project, you can grab all the new information by using git fetch remotename in the terminal:
git fetch remotename # Fetches updates made to an online repository
Otherwise, you can always add a new remote.

Merge

Merging combines your local changes with changes made by others. Typically, you'd merge a branch on your online repository with your local branch:
git merge remotename/branchname # Merges updates made online with your local work

Pull

git pull is a convenient shortcut for completing both git fetch and git merge in the same command:
git pull remotename branchname # Grabs online updates and merges them with your local work
Because pull performs a merge on the retrieved changes, you should ensure that your local work is committed before running the pull command. If you run into a merge conflict you cannot resolve, or if you decide to quit the merge, you can use git merge --abort to take the branch back to where it was before you pulled.
---
git pull upstream <branch>
---
git checkout -b <branch>    # = git branch <branch> ; git checkout <branch>
git pull origin <branch>
git checkout -b <branch> origin/<branch>
git checkout master
git merge <branch>
git branch -a
git branch -m old_branch new_branch
git branch -D <branch>
git push origin :<branch>            # delete remote branch in origin
git push origin --delete <branch>
---
git log -- [filename]
gitk [filename]
---
git fetch --tags
git log -p filename
git checkout -b branch_name tag_name
---
Before you can start working locally on a remote branch, you need to fetch it. To fetch a branch, you simply need to:
git fetch origin
This will fetch all of the remote branches for you. You can see the branches available for checkout with:
git branch -v -a
With the remote branches in hand, you now need to check out the branch you are interested in, giving you a local working copy:
git checkout -b test origin/test
EDIT - On Git >= 1.6.6 you can just do:
git fetch
git checkout test
---
git fetch upstream
git checkout master
git reset --hard upstream/master
git push origin master --force
---
git fetch
git checkout -b branch_name upstream/branch_name
git branch --set-upstream-to=upstream/branch_name branch_name

Given a branch foo and a remote upstream:

As of Git 1.8.0:
git branch -u upstream/foo
Or, if local branch foo is not the current branch:
git branch -u upstream/foo foo
Or, if you like to type longer commands, these are equivalent to the above two:
git branch --set-upstream-to=upstream/foo
git branch --set-upstream-to=upstream/foo foo

As of Git 1.7.0:
git branch --set-upstream foo upstream/foo

Notes: all of the above commands will cause local branch foo to track remote branch foo from remote upstream. The old (1.7.x) syntax is deprecated in favor of the new (1.8+) syntax, which is intended to be more intuitive and easier to remember.
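A quick way to convince yourself that git branch -u does what the notes above claim is a throwaway pair of repositories. Everything below is local and hypothetical (a temp directory, a bare upstream.git, a branch named foo); it only assumes a reasonably recent git (>= 1.8) on PATH.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A bare repository standing in for the remote.
git init -q --bare upstream.git

# Clone it, make an initial commit, and publish a branch "foo".
git clone -q upstream.git work 2>/dev/null
cd work
git config user.email you@example.com
git config user.name you
echo hello > README
git add README
git commit -q -m "initial commit"
git push -q origin HEAD:foo

# Create a local foo and make it track origin's foo (Git >= 1.8 syntax).
git fetch -q origin
git checkout -q -b foo
git branch -u origin/foo foo

# Ask git which upstream the current branch now tracks.
git rev-parse --abbrev-ref '@{u}'   # prints: origin/foo
```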
---
git show af60e1012d9d3f41bef1db62aff3ab49c040e2fb
---
git checkout <commit>
git checkout <commit> file/to/restore
git checkout <commit>~1 file/to/restore
---
git remote add origin git@github.com:echarles/openaos.git
git push origin master
---
git remote add upstream git://...
git fetch upstream
git merge upstream master
# if fatal: 'upstream' does not point to a commit
git pull upstream master
git push origin master
---
git merge upstream/master ?
---
git lfs install
git lfs track "*.pdf"
---
mkdir test
git init --bare
git remote rm origin
git remote add origin git@aos.be:test
git push origin master
git remote show origin
git diff --no-prefix --staged
---
git diff master..branch
---
git squash
git cherry-pick
---
git whatchanged
git log --name-status
git log --name-only
git log --stat
---
git show <commit>
git diff <commit>^ <commit>
---
git reset HEAD .
git reset HEAD^ .
---
If you want to retrieve a file in your history and you know the path the file was at, you can do this:
git log -- /path/to/file
This should show a list of commits which touched that file. Then, you can find the version of the file you want, and display it with:
git show <commit> -- /path/to/file
(or restore it into your working copy with: git checkout <commit> -- /path/to/file)

--------------
GIT COMPLETION
--------------

https://github.com/git/git/tree/master/contrib/completion
---
Git auto-completion: execute the following in your terminal:
cd ~
curl https://github.com/git/git/raw/master/contrib/completion/git-completion.bash -OL
vim .bash_profile
# add the following line:
source ~/git-completion.bash
# go back to terminal and execute:
source .bash_profile
Now, hitting tab will autocomplete your git commands, including branch names, e.g.:
git checkout <TAB> shows you the available branches and tags
git checkout fix-2<TAB> completes it to git checkout fix-29237810012

----------
GIT CLIENT
----------

git on linux:
+ gitg
+ giggle
+ gitk
+ git-cola

svn co https://svn.github.com/echarles/openaos.git openaos

----------
GIT SERVER
----------

http://tumblr.intranation.com/post/766290565/how-set-up-your-own-private-git-server-linux

How to set up your own private Git server on Linux

Update 2: as pointed out by Tim Huegdon, several comments on a Hacker News thread pointing here, and the excellent Pro Git book, Gitolite seems to be a better solution for multi-user hosted Git than Gitosis. I particularly like the branch-level permissions aspect, and what that means for business teams. I've left the original article intact.

Update: the ever-vigilant Mike West has pointed out that my instructions for permissions and git checkout were slightly askew. These errors have been rectified.

One of the things I'm attempting to achieve this year is simplifying my life somewhat. Given how much of my life revolves around technology, a large part of this will be consolidating the various services I consume (and often pay for). The mention of payment is important, as up until now I've been paying the awesome GitHub for their basic plan.
I don't have many private repositories with them, and all of them are strictly private code (this blog; Amanda's blog templates and styles; and some other bits) which don't require collaborators. For this reason, paying money to GitHub (awesome though they may be) seemed wasteful. So I decided to move all my private repositories to my own server. This is how I did it.

Set up the server

These instructions were performed on a Debian 5 "Lenny" box, so assume them to be the same on Ubuntu. Substitute the package installation commands as required if you're on an alternative distribution.

First, if you haven't done so already, add your public key to the server:
ssh myuser@server.com mkdir .ssh
scp ~/.ssh/id_rsa.pub myuser@server.com:.ssh/authorized_keys
Now we can SSH into our server and install Git:
ssh myserver.com
sudo apt-get update
sudo apt-get install git-core
...and that's it.

Adding a user

If you intend to share these repositories with any collaborators, at this point you'll either:
- want to install something like Gitosis (outside the scope of this article, but this is a good, if old, tutorial); or
- add a "shared" Git user.
We'll be following the latter option. So, add a Git user:
sudo adduser git
Now you'll need to add your public key to the Git user's authorized_keys:
sudo mkdir /home/git/.ssh
sudo cp ~/.ssh/authorized_keys /home/git/.ssh/
sudo chown -R git:git /home/git/.ssh
sudo chmod 700 !$
sudo chmod 600 /home/git/.ssh/*
Now you'll be able to authenticate as the Git user via SSH. Test it out:
ssh git@myserver.com

Add your repositories

If you were not going to share the repositories, and just wanted to access them for yourself (like I did, since I have no collaborators), you'd do the following as yourself. Otherwise, do it as the Git user we added above. If using the Git user, log in as them:
login git
Now we can create our repositories:
mkdir myrepo.git
cd !$
git --bare init
The last step creates an empty repository.
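The server-side bare-repository step and the later push from the development machine can be rehearsed entirely on one machine. A sketch, assuming git is on PATH; myrepo.git and the file names are placeholders, and a local path stands in for the git@server.com SSH URL:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# "Server" side: an empty bare repository, as in the article.
mkdir myrepo.git
cd myrepo.git
git --bare init >/dev/null
cd ..

# "Client" side: a local repository pushed to that remote.
mkdir work && cd work
git init -q
git config user.email you@example.com
git config user.name you
echo draft > notes.txt
git add notes.txt
git commit -q -m "first commit"
# A local path stands in for git@server.com:myrepo.git here.
git remote add origin "$tmp/myrepo.git"
# The article pushes master; HEAD is used here so the sketch works
# whatever your default branch name is.
git push -q origin HEAD

# The bare repository now holds the branch.
git ls-remote --heads origin
```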
We're assuming you already have a local repository that you just want to push to a remote server. Repeat that last step for each remote Git repository you want. Log out of the server, as the remaining operations will be completed on your local machine.

Configure your development machine

First, we add the remotes to your local machine. If you've already defined a remote named origin (for example, if you followed GitHub's instructions), you'll want to delete the remote first:
git remote rm origin
Now we can add our new remote:
git remote add origin git@server.com:myrepo.git
git push origin master
And that's it. You'll probably also want to make sure you add a default merge and remote:
git config branch.master.remote origin && git config branch.master.merge refs/heads/master
And that's all. Now you can push/pull from origin as much as you like, and it'll be stored remotely on your own myserver.com remote repository.

Bonus points: make SSH more secure

This has been extensively covered by the excellent Slicehost tutorial, but just to recap, edit the SSH config:
sudo vi /etc/ssh/sshd_config
And change the following values:
Port 2207
...
PermitRootLogin no
...
AllowUsers myuser git
...
PasswordAuthentication no
Where 2207 is a port of your choosing. Make sure to add this to your Git remote:
git remote add origin ssh://git@myserver.com:2207/~/myrepo.git

SVN

svn help
usage: svn <subcommand> [options] [args]
Subversion command-line client, version 1.6.15.
Type 'svn help <subcommand>' for help on a specific subcommand.
Type 'svn --version' to see the program version and RA modules
or 'svn --version --quiet' to see just the version number.

Most subcommands take file and/or directory arguments, recursing on the directories. If no arguments are supplied to such a command, it recurses on the current directory (inclusive) by default.
Available subcommands:
   add
   blame (praise, annotate, ann)
   cat
   changelist (cl)
   checkout (co)
   cleanup
   commit (ci)
   copy (cp)
   delete (del, remove, rm)
   diff (di)
   export
   help (?, h)
   import
   info
   list (ls)
   lock
   log
   merge
   mergeinfo
   mkdir
   move (mv, rename, ren)
   propdel (pdel, pd)
   propedit (pedit, pe)
   propget (pget, pg)
   proplist (plist, pl)
   propset (pset, ps)
   resolve
   resolved
   revert
   status (stat, st)
   switch (sw)
   unlock
   update (up)

Changesets

Before we proceed further, we should warn you that there's going to be a lot of discussion of "changes" in the pages ahead. A lot of people experienced with version control systems use the terms "change" and "changeset" interchangeably, and we should clarify what Subversion understands as a changeset.

Everyone seems to have a slightly different definition of changeset, or at least a different expectation of what it means for a version control system to have one. For our purposes, let's say that a changeset is just a collection of changes with a unique name. The changes might include textual edits to file contents, modifications to tree structure, or tweaks to metadata. In more common speak, a changeset is just a patch with a name you can refer to.

In Subversion, a global revision number N names a tree in the repository: it's the way the repository looked after the Nth commit. It's also the name of an implicit changeset: if you compare tree N with tree N-1, you can derive the exact patch that was committed. For this reason, it's easy to think of revision N as not just a tree, but a changeset as well. If you use an issue tracker to manage bugs, you can use the revision numbers to refer to particular patches that fix bugs; for example, "this issue was fixed by r9238." Somebody can then run svn log -r 9238 to read about the exact changeset that fixed the bug, and run svn diff -c 9238 to see the patch itself. And (as you'll see shortly) Subversion's svn merge command is able to use revision numbers.
You can merge specific changesets from one branch to another by naming them in the merge arguments: passing -c 9238 to svn merge would merge changeset r9238 into your working copy.

svn propset svn:externals "eggtoolpalette -r853 http://svn.gnome.org/svn/libegg/trunk/libegg/toolpalette/" .
svn commit -m "Added eggtoolpalette"
svn log | more
svn up
svn up -rXXXX
svn diff -r 63:64 .

MANAGEMENT

+ BASH

sort
uniq
wc
wc -l
ls -lh

list=`ls *.csv`
for file in $list
do
  cat $file >> new_file.csv
done
cat -vet

table=yourtable
hive -e "load data local inpath '$file' into table $table"
cat *.csv > output.csv

netstat -npl
netstat -nr
netstat -a -t --numeric-ports -p
sockstat -l | grep sshd

jflex flex lex

chmod -R 755 .   # default permission

tty
script -a /dev/pts/1
xmllint

$ cat /proc/meminfo
$ less /proc/meminfo
$ more /proc/meminfo
$ egrep --color 'Mem|Cache|Swap' /proc/meminfo
Sample outputs:
MemTotal:     8120568 kB
MemFree:      2298932 kB
Cached:       1907240 kB
SwapCached:         0 kB
SwapTotal:   15859708 kB
SwapFree:    15859708 kB
$ free -m

command | tee file

w3m
lspci
lsusb
dmesg | grep eth0
more /etc/fstab
fdisk -l
du -hs /path/to/directory | sort
df -h
Usually I put -h to make the sizes human readable. Another good tool to check the disk space used by directories is du. You may have noticed that when you type ls -l, every directory shows the same size, 4096: that is because a directory is actually a file. But we want to know how much a directory holds, not the size of the directory file itself.
To show all directory sizes, including subdirectories:
du -h
To calculate the size of the current directory you are in (-s stands for summary):
du -sh
To show the sizes of the first-level subdirectories only (not sub-subdirectories):
du -sh *
To show the size of a specific directory:
du -sh /home
To show the sizes of all subdirectories of a specific directory:
du -sh /home/*

KERNEL

+ process

ls /proc/<pid>/...
netstat -lnap
mount
pidof ...
top -p pid
htop ...
F5
pidstat -d -p ALL 5 10
ps -auxww
ps -axfus
strace
ltrace
renice
top
mtr
dig +trace hostname
traceroute

File descriptor output types (stdout is 1 and stderr is 2):
strace echo "1" > /dev/null 1>&2

+ file

stat
time ls -R / > /dev/null   (do it twice)

+ io

iostat
iostat -x 1
vmstat 1
netstat -l -p
sockstat -4 -l | grep :80 | awk '{print $3}' | head -1

time smtp-source -A -C1500 -l 100 -m 100000 -s 500 -d -c -f nm@test.de -t te 213.157.22.218:25
time smtp-source -L -s 40 -m 100 -l 4096 -d -c -f me@elasticinbox.com -t test@elasticinbox.com ElasticInbox-LB-1070648408.eu-west-1.elb.amazonaws.com:2400

for i in `seq -w 1 1000`; do lsof -a -u dweiss -c java > snap.$i; sleep 5; done

find queue-jms/src/test/ -name '*.java' -print | xargs sed -i 's/\t/    /g'
find /tmp/ -name 'aos-bu-*' -print0 | xargs -0 rm -fr

tr 'A-Z' 'a-z' < subtitles_124.txt | tr -sc 'A-Za-z' '\n' | sort | uniq -c | sort -n -r | less
tr ";" "," < in.csv | tr -d "\"" > out.csv

echo $?

tar xvfj *.bz2
tar xvfz *.tar.gz
locate file

bzcat stackoverflow.com-Posts.7z | hdfs dfs -put - /user/srowen/Posts.xml

patch -p0 --dry-run < file.patch

ubuntu startup scripts:
vi /etc/init.d
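The tr/sort/uniq pipeline above is the classic word-frequency one-liner. Here it is run on an inline sample instead of subtitles_124.txt (the file name in the notes), so the shape of the output is visible. Note that uniq -c only collapses adjacent duplicates, which is why a sort must come before it.

```shell
# Lowercase, split on non-letters, count occurrences, most frequent first.
printf 'The cat saw the other cat\n' \
  | tr 'A-Z' 'a-z' \
  | tr -sc 'a-z' '\n' \
  | sort \
  | uniq -c \
  | sort -rn
# The counts for "the" and "cat" (2 each) come out on top.
```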