This is my sharing space, covering technical, faith, and life experiences. I hope you enjoy it with me. God bless you. :)
Wednesday, October 29, 2008
osip2 experience
Some state definitions in the osip2 header file
ICT_XXXX means INVITE client transaction
IST_XXXX means INVITE server transaction
NICT_XXXX means non-INVITE client transaction
NIST_XXXX means non-INVITE server transaction
These are defined in libosip2-3.1.0/include/osip2/osip.h
Monday, October 27, 2008
ubuntu apt-get experience
apt-get install screen
The following description is a snapshot from http://linuxhelp.blogspot.com/2005/12/concise-apt-get-dpkg-primer-for-new.html
December 13, 2005
A Concise apt-get / dpkg primer for new Debian users
Debian is one of the earliest Linux distributions around. It caught the public's fancy because of the ease of installing and uninstalling applications on it. When many other Linux distributions were bogged down in dependency hell, Debian users were shielded from these problems owing to Debian's superior package handling capabilities using apt-get.
All Linux distributions which claim their roots in the Debian distribution use this versatile package manager. For the uninitiated, Debian uses the deb package format for bundling together the files belonging to an application. You can think of it as something like a setup installer (e.g. InstallShield) on Windows.
Here I will explain how to go about using this package handling utility to get the results that you desire.
The first step needed to use apt-get to your advantage is including the necessary repositories. Repositories are merely collections of software stored in a public location on the internet. By including the web address of these repositories, you are directing apt-get to search these locations for the desired software. You use the /etc/apt/sources.list file to list the addresses of the repositories. It takes the following format:
deb [web address] [distribution name] [main contrib non-free]
For example, in Ubuntu, a Debian-based distribution, it could be something like this:
deb http://in.archive.ubuntu.com/ubuntu breezy main restricted
You can add any repository you like. apt-get.org contains an excellent collection of repositories to suit all tastes.
Once you have set the repositories, the next step is to sync the local package database with the databases on the repositories. This caches a copy of the list of all the remotely available software on your machine. This is achieved by running the following command:
# apt-get update
An advantage of this is that you now have the power to search for a particular program and see whether it is available for your distribution version using the apt-cache command, and you don't need a net connection to do this. For example,
# apt-cache search baseutils
... will tell me if the package baseutils is available in the repository or not by searching the locally cached copy of the database.
Once you have figured that the package (in our case baseutils) is available, then installing it is as simple as running the following command:
# apt-get install baseutils
The real power of apt-get is realised now. If the baseutils package depends on the availability of a particular version of a library, say "xyz1.5.6.so", then apt-get will download the library (or the package containing it) from the net and install it before installing the baseutils package. This is known as automatic dependency resolution.
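If you want to preview what apt-get would do before committing to it, a dry run works well (my own addition, not part of the original article): the -s (--simulate) flag prints the install actions, including the dependencies that would be pulled in, without changing anything.
# apt-get -s install baseutils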
And removing a package is as simple as running the command:
# apt-get remove baseutils
Get statistics about the packages available in the repositories by running the command:
# apt-cache stats
Total package names : 22502 (900k)
Normal packages: 17632
Pure virtual packages: 281
Single virtual packages: 1048
Mixed virtual packages: 172
Missing: 3369
...
To upgrade all the software on your system to the latest versions, do the following:
# apt-get upgrade
And finally the king of them all - upgrading the whole distribution to a new version can be done with the command:
# apt-get dist-upgrade
Saving valuable hard disk space
Each time you install an application using apt-get, the package is actually cached in a location on your hard disk. It is usually stored in the location /var/cache/apt/archives/ . Over a period of time, all the cached packages will eat up your valuable hard disk space. You can clear the cache and release hard disk space by using the following command:
# apt-get clean
You could also use autoclean, wherein only those cached packages that are obsolete or only partially downloaded are deleted.
# apt-get autoclean
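To see how much disk space the package cache is actually occupying before and after cleaning (my own note, not from the original article):
# du -sh /var/cache/apt/archives/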
dpkg - The low level Package management utility
As I said earlier, Debian-based distributions use the deb package format. Usually normal users like you and me are shielded from handling individual deb packages. But if you fall into a situation where you have to install a deb package, you use the dpkg utility.
Let's assume I have a deb package called gedit-2.12.1.deb and I want to install it on my machine. I do it using the following command:
# dpkg -i gedit-2.12.1.deb
To remove an installed package, run the command:
# dpkg -r gedit
The main thing to note above is that I have used only the name of the program, not the version number, while removing the software.
You may also use the --purge (-P) flag for removing software.
# dpkg -P gedit
This will remove gedit along with all its configuration files, whereas -r (--remove) does not delete the configuration files.
Now let's say I do not want to actually install a package but want to see the contents of a deb package. This can be achieved using the -c flag:
# dpkg -c gedit-2.12.1.deb
To get more information about a package, like the author's name, the year in which it was compiled and a short description of its use, you use the -I flag:
# dpkg -I gedit-2.12.1.deb
You can even use wild cards to list the packages on your machine. For example, to see all the gcc packages on your machine, do the following:
# dpkg -l gcc*
Desired=Unknown/Install/Remove/Purge/Hold
Status=Not/Installed/Config-files/Unpacked/Failed-config/.
/ Err?=(none)/Hold/Reinst-required/X=both-problems
/ Name Version Description
+++-===============-==============-========================
ii gcc 4.0.1-3 The GNU C compiler
ii gcc-3.3-base 3.3.6-8ubuntu1 The GNU Compiler Colletio
un gcc-3.5 none (no description available)
un gcc-3.5-base none (no description available)
un gcc-3.5-doc none (no description available)
ii gcc-4.0 4.0.1-4ubuntu9 The GNU C compiler
...
In the above listing, the first 'i' denotes the desired state, which is install. The second 'i' denotes the actual state, i.e. gcc is installed. The third column gives the error flags, if any. The fourth, fifth and sixth columns give the name, version and description of the packages respectively. gcc-3.5 is not installed on my machine, so its status is given as 'un', which means unknown/not-installed.
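A related one-liner I find handy (my own addition, not in the original article): list only the packages that are actually installed by filtering on the 'ii' status:
# dpkg -l | grep '^ii'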
To check if an individual package is installed, you use the status (-s) flag:
# dpkg -s gedit
Two days back, I installed beagle (a real time search tool based on Mono) on my machine. But I didn't have a clue about the location of the files as well as what files were installed along with beagle. That was when I used the -L option to get a list of all the files installed by the beagle package.
# dpkg -L beagle
Even better, you can combine the above command with grep to get a listing of all the html documentation of beagle.
# dpkg -L beagle | grep html$
These are just a small sample of the options you can use with the dpkg utility. To know more about this tool, check its man page.
If you are allergic to excessive command line activity, you may also use dselect, which is a curses-based, menu-driven front-end to the low level dpkg utility.
GUI front-ends for apt-get
* Synaptic
* Aptitude
Apache, MySQL service
# mysqld and apache are started in the same way on Linux
/usr/local/mysql/bin/mysqld_safe &
/usr/local/apache2/bin/apachectl start
Q. How to disable httpd port 443 listening?
In the /etc/httpd/conf.d folder there are many module config files. The config file named ssl.conf holds the https settings.
Comment out the "Listen 443" line, and httpd will no longer bind to port 443 when it is launched.
PS. /etc/httpd/conf/httpd.conf is the main apache config file.
/etc/httpd/conf.d/ contains all the apache sub-module config files.
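A quick way to make and verify this change from the shell (my own note; it assumes the stock /etc/httpd layout mentioned above):
# sed -i 's/^Listen 443/#Listen 443/' /etc/httpd/conf.d/ssl.conf
# service httpd restart
# netstat -tln | grep ':443'
If the last command prints nothing, httpd is no longer listening on port 443.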
/sbin/ipnat -f /etc/ipnat.conf
/sbin/sysctl -w net.inet.ip.forwarding=1
/usr/share/denyhosts/daemon-control start
Labels:
Linux Service
Using VMware Server: unable to connect to the server running the VMware Server daemon
We can use netstat to show that the VMware Server daemon is listening on TCP port 904:
# netstat -anp | grep 904
tcp    0    0 :::904    :::*    LISTEN    3786/xinetd
We also disabled the iptables service
# service iptables stop
Flush all the iptables rules in the filter table
# iptables -F
But we still could not use the VMware Server console to log in to the target machine. Finally I found that the problem was caused by the routing settings: the target machine has an interface configured with an IP address in the same subnet as the connecting machine, so the return packets take a different path than the forward packets. As a result, VMware reports that the destination is unreachable, as in the attached diagram.
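A rough way to confirm and work around this kind of asymmetric routing from the target machine (my own sketch; the address and interface name are placeholders):
# ip route get <connecting-machine-ip>    // shows which interface the reply would leave from
# ip route add <connecting-machine-ip>/32 dev eth0    // force replies back out the expected interface
The cleaner fix is of course to remove the overlapping subnet configuration.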
Wednesday, October 22, 2008
Linux Command - awk experience
Example1.
# Contents of /etc/links1
WAN1=rtk0
WAN2=rtk1
LAN1=rtk2
DMZ1=rtk3
DMZ2=rtk4
WAN3=rtk5
Count the lines matching WAN
# gawk '/WAN/{count++}END{print count}' /etc/links1
3
Example2.
用"="號來區隔出資料變數,並存在var陣列中
# gawk 'BEGIN{split("DMZ=rtk3",var,"=");print var[2]}'
rtk3
awk's default field separator is a space; here we change the default field separator to "=" and then extract the second field of the file
# gawk -F'=' '{ print $2 }' /etc/links
rtk0
rtk1
rtk2
rtk3
rtk4
rtk5
Example3.
Print the third field of /etc/ifconfig.rtk0
# gawk '{print $3}' /etc/ifconfig.rtk0
Show the lines whose packet count is non-zero
iptables -t mangle -L -v -n | awk '$1!=0 {print $0}'
Chain Chain1 (1 references)
pkts bytes target prot opt in out source destination
1 78 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 LAYER7 l7proto nbns
2 99 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 LAYER7 l7proto rdp
4 226 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 LAYER7 l7proto telnet
$ ls -la | awk '($1 ~ /-rw-r--r--/){print $9}'
aa.gif
Example.
Contents of file aaa
Total run 5 times
ftp_0.pat allchar-sets 0 printchar-sets 0
ftp_1.pat allchar-sets 0 printchar-sets 0
ftp_2.pat allchar-sets 0 printchar-sets 0
ftp_3.pat allchar-sets 5 printchar-sets 0
msn_0.pat allchar-sets 1 printchar-sets 1
msn_1.pat allchar-sets 0 printchar-sets 0
msn_2.pat allchar-sets 0 printchar-sets 0
msn_3.pat allchar-sets 0 printchar-sets 6
msn_4.pat allchar-sets 0 printchar-sets 0
msn_5.pat allchar-sets 0 printchar-sets 0
Extract the records whose third or fifth field is not 0
# cat aaa | awk 'NR>1 && ($3!=0 || $5!=0){print $0}'
ftp_3.pat allchar-sets 5 printchar-sets 0
msn_0.pat allchar-sets 1 printchar-sets 1
msn_3.pat allchar-sets 0 printchar-sets 6
Every 3 seconds, print some fields from /proc/slabinfo
# while [ 1 ]; do cat /proc/slabinfo | awk '/\/{print $2, $3}' | tail -1; sleep 3; done
Contents of /flash/etc/system.conf
PASSTHROUGHPKT=5000
PASSTHROUGHBYTE=1000000
#RANDOMRATE=40
# awk -F "=" '/^PASSTHROUGHBYTE/{print $2}' /flash/etc/system.conf
1000000
VER= $(shell awk -F"[:|@]" '{ print $$3 }' CVS/Root)
$ cat CVS/Root
:ext:vincent@192.168.17.190:/home/cvsroot
$ cat CVS/Root | awk -F"[:|@]" '{ print $3 }' // here the awk -F separator uses the class [:|@], i.e. any one of these characters
my_cvs_username
HOST_IP=$(shell ifconfig eth0 | grep "inet addr" | awk -F: '{print $$2}' | awk '{print $$1}')
$ ifconfig eth0
Warning: cannot open /proc/net/dev (Permission denied). Limited output.
eth0 Link encap:Ethernet HWaddr 00:0E:A6:44:ED:A7
inet addr:192.168.211.18 Bcast:192.168.211.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:209 Memory:fbffc000-0
STARTTIME= $(shell LC_ALL=C date )
What is the purpose of LC_ALL=C? :(
The things I don't quite understand here:
1. Inside a Makefile, shell command execution has to go through the shell function, in the following format:
$(shell command-list)
where command-list is the commands you would normally run in a shell
2. Inside a Makefile the arguments all have to be written as $$1, $$2, $$3 (especially inside shell commands); I'm still not entirely sure why.
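My later understanding of these two points (my own note, not part of the original): make expands `$` itself, so `$$1` in a Makefile reaches awk/the shell as a literal `$1`, while a plain `$1` would be consumed by make first. LC_ALL=C forces the plain C locale, so the output of date always has the same format and English month names regardless of the system locale, which keeps it safe to parse. A small illustration in the shell:
$ LC_ALL=C date
$ echo 'show-home: ; echo $$HOME' | make -f - show-home    // make rewrites $$HOME to $HOME before invoking the shell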
ls | awk '{print "echo filename:"$1" ; tcpdump -nv host 207.46.2.152 -r "$1}' | sh
ls | xargs -n 1 tcpdump -nv host 207.46.2.152 -r
ls -cr --sort=t | head -1 | xargs -n 1 md5sum
ls -lacr --sort=t | head -2 | awk 'NR!=1{print $8}'
InstantScan-10-prophet.bin
[vincent@CMF tftpboot]$ ls -lacr --sort=t | head -2 | awk 'NR!=1{print $8}' | md5sum // here md5sum is hashing the string, not the file
eb1c42d6d509de2d02c6d426a6b74934 -
[vincent@CMF tftpboot]$ md5sum InstantScan-10-prophet.bin
c98bbabbd0d22cdc15b37d9759b78ba4 InstantScan-10-prophet.bin
ls -ur --sort=t // sort by access time, from the oldest to the newest
ls -cr --sort=t // sort by last modification time, from the oldest to the newest
ls -u --sort=t // sort by access time, from the newest to the oldest
ls -c --sort=t // sort by last modification time, from the newest to the oldest
// tested OK
ls -lcr --sort=t | head -100 | awk 'NR!=1{print $8}' | sudo xargs -n 1 rm -f
Some simple awk commands
# more /etc/ifconfig.rtk0 | awk '{print $1}'
192.168.17.205
# gawk '{print $2}' /etc/ifconfig.rtk0
192.168.17.173
# cat /etc/ifconfig.rtk0 | gawk '{print $2}'
192.168.17.173
# more /etc/ifconfig.rtk0 | gawk '{print $2}'
192.168.17.173
Periodically display the snort_inline PID and the current time
#!/bin/sh
while (true)
do
ps aux | grep snort_inline | grep -v grep | awk '{printf "%s ", $1}'; date | awk '{print $4}'
sleep 5;
done
Result
29205 11:28:59
29205 11:29:04
29205 11:29:09
29205 11:29:14
29205 11:29:19
29205 11:29:24
29205 11:29:29
29205 11:29:34
29205 11:29:39
29205 11:29:44
while [ 1 ]; do ps aux | grep snort | grep -v grep | awk '{print $1}'; sleep 20; done
Labels:
Linux Command
Monday, October 20, 2008
Boot linux in the single mode
http://www.cyberciti.biz/faq/grub-boot-into-single-user-mode/
(2) Select the kernel
(3) Press the e key to edit the entry
(4) Select second line (the line starting with the word kernel)
(5) Press the e key to edit kernel entry so that you can append single user mode
(6) Append the letter S (or word Single) to the end of the (kernel) line
(7) Press ENTER key
(8) Now press the b key to boot the Linux kernel into single user mode
(9) When prompted, give the root password and you will be allowed to log in to single user mode.
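For illustration only (my own addition; the kernel version and root device will differ on every system), after step (6) the edited kernel line in the GRUB editor looks something like:
kernel /vmlinuz-2.6.18-92.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet single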
http://www.cyberciti.biz/tips/howto-recovering-grub-boot-loader-password.html
In CentOS 5.4, this method does not seem to work. (tested 2010-03-05)
Principles of Happiness
1. Free your heart from hate
2. Free your mind from worry
3. Live simply
4. Give more
5. Expect less
Wednesday, October 15, 2008
crontab format
# +---------------- minute (0 - 59)
# | +------------- hour (0 - 23)
# | | +---------- day of month (1 - 31)
# | | | +------- month (1 - 12)
# | | | | +---- day of week (0 - 6) (Sunday=0 or 7)
# | | | | |
* * * * * command to be executed
// List the crontab settings
# crontab -l
// Edit the crontab settings
# crontab -e
# For example, run /root/bin/prog1 every 5 minutes
*/5 * * * * /root/bin/prog1
# Back up the files every Monday at 06:00 AM
0 6 * * 1 /usr/local/sbin/backup.sh
# Back up the files every two days at 06:00 AM
0 6 */2 * * /usr/local/sbin/cvsbackup.sh
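One habit worth adding (my own note, not in the original entries): redirect a job's output to a log file so that errors are not silently lost or mailed away:
*/5 * * * * /root/bin/prog1 >> /var/log/prog1.log 2>&1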
Common useful mysql commands
Use the root account to log in to the "mysql" database
# mysql -u root mysql
Show all fields of a selected table
mysql> desc user;
mysql> select User,Host from mysql.user;
Some on-line help
mysql> help contents;
You asked for help about help category: "Contents"
For more information, type 'help <item>', where <item> is one of the following
categories:
Account Management
Administration
Data Definition
Data Manipulation
Data Types
Functions
Functions and Modifiers for Use with GROUP BY
Geographic Features
Language Structure
Storage Engines
Stored Routines
Table Maintenance
Transactions
Triggers
mysql> Update user SET Insert_priv='y',Update_priv='y' where user='openser';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
# Before updating the values: show the specified fields of the indicated record (user)
mysql> Select User,Select_priv,Insert_priv,Update_priv,Delete_priv,Create_priv,Drop_priv,Reload_priv From user WHERE user='openser';
+---------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+
| User | Select_priv | Insert_priv | Update_priv | Delete_priv | Create_priv | Drop_priv | Reload_priv |
+---------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+
| openser | Y | Y | Y | N | N | N | N |
+---------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+
1 row in set (0.00 sec)
# Update the values of the specified fields in the indicated record (user)
mysql> Update user SET Delete_priv='y',Create_priv='y',Drop_priv='y',Reload_priv='y' where user='openser';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
# After updating the values: show the specified fields of the indicated record (user)
mysql> Select User,Select_priv,Insert_priv,Update_priv,Delete_priv,Create_priv,Drop_priv,Reload_priv From user WHERE user='openser';
+---------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+
| User | Select_priv | Insert_priv | Update_priv | Delete_priv | Create_priv | Drop_priv | Reload_priv |
+---------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+
| openser | Y | Y | Y | Y | Y | Y | Y |
+---------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+
1 row in set (0.00 sec)
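One thing worth adding from my own experience (not shown above): when the grant tables are edited directly with UPDATE like this, mysqld keeps using the old privileges until they are reloaded:
mysql> FLUSH PRIVILEGES;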
Linux Command - sed
Example1.
# Contents of /etc/links1
WAN1=rtk0
WAN2=rtk1
LAN1=rtk2
DMZ1=rtk3
DMZ2=rtk4
WAN3=rtk5
# sed -n '3p' /etc/links1
LAN1=rtk2 --> only line 3 is printed
# sed '5p' /etc/links1
WAN1=rtk0
WAN2=rtk1
LAN1=rtk2
DMZ1=rtk3
DMZ2=rtk4
DMZ2=rtk4 --> line 5 is printed again
WAN3=rtk5
./Orig file
/usr/home/vincent/EP/target/vendors/D-Link/DFL-1500/www/help/help_files/ad_bm.html
/usr/home/vincent/EP/target/vendors/D-Link/DFL-1500/www/help/help_files/ad_bm_action.html
// After running sed
sed 's/EP\/target\/vendors\/D-Link\/DFL-1500\/www\/help/Help\/Current/g' ./Orig > ./Comp
./Comp file
/usr/home/vincent/Help/Current/help_files/ad_bm.html
/usr/home/vincent/Help/Current/help_files/ad_bm_action.html
Investigate how to strip all the directory paths and keep only the file names
Enterprise$ cat x
/abc/123
/abc/der/123
/usr/home/vincent/Help/Current/help_files/ad_bm_action.html
# sed 's/\/([a-zA-Z0-9_]* \ / )*//g' ./x ==> this approach does not work so far
Enterprise$ sed 's/\/.*\///g' ./x ==> this approach works, but it is too simplistic; I need to think of a better way :)
123
123
ad_bm_action.html
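Two alternatives I found later for keeping only the file name (my own note, not part of the original entry):
$ awk -F/ '{print $NF}' ./x
$ while read p; do basename "$p"; done < ./x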
.config
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.10
# Tue Nov 29 11:18:26 2005
#
CONFIG_X86=y
CONFIG_MMU=y
CONFIG_UID16=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_IOMAP=y
#
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
CONFIG_CLEAN_COMPILE=y
CONFIG_BROKEN_ON_SMP=y
# sed -ne 's/^\([A-Z0-9_]*\)=\(.*\)$/#define \1 \2/p' < .config > ./output
output
#define CONFIG_X86 y
#define CONFIG_MMU y
#define CONFIG_UID16 y
#define CONFIG_GENERIC_ISA_DMA y
#define CONFIG_GENERIC_IOMAP y
[omitted]
The separator token can be either ; or /
# echo --datadir=/var/lib/mysql | sed -e 's;--datadir=;;'
/var/lib/mysql
# echo --datadir=/var/lib/mysql | sed -e 's/--datadir=//'
/var/lib/mysql
We can use sign "#" as the separater, the example is a snapshot of the kamailio Makefile install target
# sed -e "s#/usr/.*lib/kamailio/modules/#/usr/local/lib/kamailio/modules/#g" < etc/kamailio.cfg
# Keep a backup of the original file, with the suffix ".old" appended to its filename
sed -i.old -e 's%ftp://ftp.gnu.org/gnu/gcc/releases/gcc-%http://ftp.gnu.org/gnu/gcc/gcc-%' -e 's/gdb //' make/gcc-uclibc-3.3.mk
Tuesday, October 14, 2008
Linux Command - Some useful tools
Show what kind of command a name refers to?
# type ulimit
ulimit is a shell builtin
# type vi
vi is aliased to `vim'
# type mysql
mysql is /usr/bin/mysql
# type mysqld_safe
mysqld_safe is /usr/bin/mysqld_safe
Question:
The "type" command is similar to the "file" command. What is the difference?
Answer:
The "file" command is used to determine the file type of a specified file.
And "type" command is used to determine the type of executable program. And "type" will also point the entire path of the specified program in the current file system.
Labels:
Linux Command
Setup mysql
http://dev.mysql.com/doc/refman/5.0/en/access-denied.html
1. Set up the grant tables that the system uses for access control. Run the following command to create the tables (in the mysql folder), which will be located in the MySQL data directory (e.g. /var/lib/mysql)
/usr/bin/mysql_install_db
/usr/local/bin/mysql_install_db --user=mysql
2. Run the mysqld server via mysqld_safe, the main startup script of MySQL
/usr/bin/mysqld_safe &
3. ps aux | grep mysql
mysql 7634 0.7 0.8 135872 17516 pts/11 Sl 14:11 0:00 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock
Generally, mysqld is started with the following arguments
/usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock
4. Stop the mysqld
Either use kill to terminate mysqld, or use mysqladmin
# /usr/bin/mysqladmin -u root shutdown
5. Trouble shooting
read the error log of mysql "/var/log/mysqld.log"
5.1. We must create the "mysql" user manually (doesn't MySQL offer a standard procedure to create the "mysql" user?)
# useradd -m mysql
5.2. According to the following error messages, mysqld needs to write its PID file but the directory has not been created yet, so we need to create it manually and set its owner/group to mysql.
[ERROR] /usr/local/libexec/mysqld: Can't create/write to file '/var/run/mysqld/mysqld.pid' (Errcode: 2)
081117 10:24:23 [ERROR] Can't start server: can't create PID file: No such file or directory
# mkdir /var/run/mysqld
# chown mysql:mysql /var/run/mysqld/
default path of mysql file
the path of server daemon
==> /usr/libexec/mysqld
config file
==> /etc/my.cnf
default error log
==> err_log=/var/log/mysqld.log
default pid file
==> pid_file=/var/run/mysqld/mysqld.pid
data directory (mysql database repository?)
DATADIR=/var/lib/mysql
MySQL uses a unix domain socket? So the socket file is at the following path... (just guessing)
mysql_unix_port=/var/lib/mysql/mysql.sock
inside the mysqld_safe
default directory
MY_BASEDIR_VERSION=/usr
ledir=/usr/libexec
DATADIR=/var/lib/mysql
MYSQL_HOME=/usr
Is my_print_defaults only used in the MySQL project?
man my_print_defaults
my_print_defaults - display options from option files
/usr/bin/my_print_defaults --loose-verbose mysqld server
One question: I can't kill the mysqld process, whether using kill -9 or kill -0; it is still alive. Only the process id changes (the process is killed but brought back by another mysql daemon)
# ps auxwww | grep mysql
mysql 10454 0.8 0.8 135872 17516 pts/11 Sl 17:35 0:00 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock
# killall -9 mysqld
# ps auxwww | grep mysql
mysql 10482 0.7 0.8 135872 17516 pts/11 Sl 17:36 0:00 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock
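The reason, as far as I can tell (my own note, not in the original entry): mysqld_safe is a watchdog script that restarts mysqld whenever it dies, so killing mysqld alone just makes it reappear with a new PID. Either shut the server down cleanly or stop the watchdog first:
# /usr/bin/mysqladmin -u root shutdown
or
# killall mysqld_safe
# killall mysqld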
Sunday, October 12, 2008
Configure kamailio experience
First you need to set up MySQL if you want to use the db_mysql module
Setup mysql via yum
# yum install mysql.i386 ==> include mysql client software
# yum install mysql-server.i386 ==> include mysqld mysqld_safe.. server program
We need to create the MySQL-related tables. Before creating them, we need to specify the DB engine type in the kamctlrc config file.
/usr/local/etc/kamailio/kamctlrc
DBENGINE=MYSQL
Then use the kamailio database script(kamdbctl) to create the related table
/usr/sbin/kamdbctl create
Run the kamailio program
./kamailio -D -ddddddddd
How to make kamailio TLS work?
* First you must compile the kamailio-1.4.0-tls_src.tar.gz source tarball with the TLS=1 set in the Makefile
* Second you must make sure the "fork = yes" in the kamailio config file
Refer to the document in the TLS tutorial
http://www.kamailio.org/docs/tls-devel.html#TLS-EXAMPLE
1.7. OpenSER with TLS - script example
IMPORTANT: The TLS support is based on TCP, and for allowing OpenSER to use TCP, it must be started in multi-process mode. So, there is a must to have the "fork" parameter set to "yes":
NOTE: Since the TLS engine is quite memory consuming, increase the used memory by the run time parameter "-m" (see OpenSER -h for more details).
* fork = yes ==> The most important part
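A quick sanity check after starting kamailio with TLS enabled (my own addition; 5061 is the default SIP-over-TLS port, adjust it if your config uses a different one):
# netstat -tlnp | grep 5061
A TCP LISTEN entry owned by kamailio should appear if the TLS listener is up.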
Configure bugzilla experience
Use checksetup.pl to set the bugzilla root e-mail address; set a reachable e-mail address if possible.
For account creation, users can log in and create a new account themselves (New Account). After a bug is filed, bugzilla automatically sends an e-mail to the mailbox of the person responsible for the bug.
Required settings:
Parameters-->
Required Settings -->
User Authentication -->
Email ==> Can't find this option :(
Users of the bugzilla system need to register an account first, otherwise bugzilla will reject any access by unauthenticated users.
Parameters-->
Required Settings -->
User Authentication -->
requirelogin (Set on, original off)
Only the bugzilla administrator can create user accounts; disable users' self-registration in the bugzilla system.
Parameters-->
Required Settings -->
User Authentication -->
createemailregexp (set "" blank, original .* allow all e-mails)
If we disable the self-registration feature, then the bugzilla administrator must create an account for each new user.
The original descriptions of the bugzilla system for the "createemailregexp" field
This defines the regexp to use for email addresses that are permitted to self-register using a 'New Account' feature. The default (.*) permits any account matching the emailregexp to be created. If this parameter is left blank, no users will be permitted to create their own accounts and all accounts will have to be created by an administrator.
Wednesday, October 8, 2008
Linux Command - tcpdump/ethereal/wireshark
Show packets whose length is at least a specified value
# tcpdump -nvi eth0 port ssh and greater 1500
// tcpdump filter: the TCP source port is not 3389 and the destination port is not 3389 either.
tcpdump -n -N -i eth1 tcp[0:2] != 3389 and tcp[2:2] != 3389
tcpdump -n -N -i eth1 tcp[0:2] = 445 or tcp[2:2] = 445
tcpdump -n -N -i eth1 'tcp[0:2]!=161 and tcp[2:2]!=161 and tcp[0:2]!=445 and tcp[2:2]!=445 and tcp[0:2]!=139 and tcp[2:2]!=139'
// The following two examples both ask tcpdump to capture connections whose src/dst ports are > 1024 and not equal to port 3389
tcpdump -n -N -i eth1 'tcp[0:2] & 0xfc00!=0 and tcp[2:2] & 0xfc00!=0 and tcp[0:2]!=3389 and tcp[2:2]!=3389'
tcpdump -n -N -i eth1 'tcp[0:2]>1024 and tcp[2:2]>1024 and tcp[0:2]!=3389 and tcp[2:2]!=3389'
Write the tcpdump capture to a file, for tcpreplay or ethereal to use later
tcpdump -n -i br0 ip host 170.116.11.94 and port 80 -w file
Watch a specific subnet
tcpdump -n -i br0 ip net 192.168.200.0/24
TCP Flags
URG ACK PSH RST SYN FIN
## tcp[13]==17 --> FIN, ACK
## tcp[13]==2 --> SYN
## tcp[13]==18 --> SYN, ACK
## tcp[13]==4 --> RST
# tcpdump -n -i br0 host 192.168.200.1 and tcp[13]==17 or tcp[13]=2
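A related, more readable form (my own note) uses tcpdump's symbolic flag names instead of the raw byte offset:
# tcpdump -n -i br0 'host 192.168.200.1 and tcp[tcpflags] & (tcp-syn|tcp-fin) != 0'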
How tcpreplay handles things when replaying traffic, if the -s snaplen set while capturing with tcpdump was too small
tcpdump -s snaplen
tcpreplay -u <pad | trunc>
where the option can be
pad -- pad the end of the packet with zeros
trunc -- re-adjusting the length in the IP header
-u or untruncate
When a packet is truncated in the capture file because the snaplen was too small, this option will pad the end of the packet with zeros, or truncate (trunc) it by re-adjusting the length in the IP header. The trunc option will only alter IPv4 packets, all others will be sent unmodified.
tcpreplay -R
tcpreplay -r, ex.
# tcpreplay -r 30
tcpreplay -m 0.1
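Putting these options together, a typical invocation looks roughly like this (my own sketch, assuming the older tcpreplay 2.x option syntax used in these notes; 3.x renamed several of these options):
# tcpreplay -i eth0 -r 30 -u pad file.pcap
// replay file.pcap out of eth0 at 30 Mbps, padding packets that were truncated by a small snaplen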
Some early experience
// Supposedly this must be run first, but in my tests it seems to work even without running this command; not entirely sure
// Set the eth0 interface to promiscuous mode, i.e. all packets on the wire that pass the local host will be parsed
#ifconfig eth0 promisc
// Disable promiscuous mode on interface eth0
#ifconfig eth0 -promisc
// Capture packets without using promiscuous mode; still testing, not entirely sure
#tcpdump -p
// dump packets whose source ip address is 143.158.11.94
#tcpdump src host 143.158.11.94
// dump packets whose source or destination ip address is 143.158.11.96
#tcpdump host 143.158.11.96
// dump packets whose source or destination port is 80
#tcpdump port 80
// dump packets whose destination network is 143.158.11.0/24
# tcpdump dst net 143.158.11.0/24
# tcpdump -nvi eth2 ip net 172.16.0.0/16
>> dump a network with a specified netmask
#./tcpdump -i wan1 -s 0 net 198.145.245.0 mask 255.255.255.0 -w aaa.pcap
>> or in this format
#./tcpdump -i wan1 -s 0 net 198.145.245.0/24 -w aaa.pcap
Promiscuous mode is a mode in which the ethernet card receives all packets; normally an ethernet card uses mode 3.
The receive modes of an ethernet card are:
01h turn off receiver
02h receive only packets sent to this interface
03h mode 2 plus broadcast packets
04h mode 3 plus limited multicast packets
05h mode 3 plus all multicast packets
06h all packets(promiscuous mode)
07h raw mode for serial line only(v1.10+)
You probably ran a network-sniffing program such as tcpdump, which is why you saw the above error message.
A network card can be put into a mode called promiscuous mode. A card working in this mode can receive all the data passing through it, regardless of whether the data's destination address is actually the card itself. This is the basic principle of sniffing: make the network card receive everything it is able to receive.
Q. How to set network card in the promiscuous mode?
A.
# ifconfig eth0 promisc
// Be sure to replace "eth0" with your own network interface in case it's "wlan0" or something else.
// To remove promiscuous mode, type:
# ifconfig eth0 -promisc
In ethereal/wireshark, Analyze => Display Filters lets you enter filtering rules to search for the specified packets
e.g.
* tcp.flags.reset == 1
* tcp.len >= 1500
* frame.number==24333 (each packet is called a frame in ethereal/wireshark, so we can use frame as a display filter component)
* tcp.analysis.flags (use wireshark tcp analysis result)
* tcp.analysis.lost_segment
* tcp.analysis.retransmission
* tcp.analysis.fast_retransmission
* cdp.checksum_bad==1 || edp.checksum_bad==1 || ip.checksum_bad==1 || tcp.checksum_bad==1 || udp.checksum_bad==1
Advanced wireshark search tips
include:
Display filter:
e.g.
frame contains fe:3d:dd:36
or other display filtering rules
Hex value:
fe:3d:dd:36 or fe3ddd36
String:
"babala" (guessing usage...)
IPv6 filtering rules in wireshark
icmpv6
ipv6.src
ipv6.dst
ipv6.addr == ff02::1:2
We can setup the coloring rule in the wireshark to separate the different packets type in the following method.
Select View -> Coloring Rules
Setup the coloring rules to display different color according the filtering rules.
Wireshark sniffs all the packets that arrive in the network card's buffer and displays them in raw form.
We can disable the TCP checksum verification by the following steps:
1. Select Edit->Preferences
2. Select protocols -> TCP from the left frame of current window
3. Disable the option of "Validate the TCP checksum if possible"
Then the checksum error of TCP packets will be ignored by the wireshark.
TCP principle
* TCP usually sends an ACK packet when it receives a data packet; the acknowledgement number is the received packet's sequence number plus the received packet's payload length.
* Sometimes TCP sends a cumulative ACK, if the previous ACK has not been sent out yet because the sender's data is arriving quickly enough (often within less than 500 ms)
(According to the RFC documents: TCP ACK generation [RFC 1122, RFC 2581])
* TCP sends a duplicate ACK when it receives an out-of-order packet (one packet is missed and a later packet is received; wireshark calls this "TCP Previous segment lost").
* After receiving an out-of-order packet (the previous packets are missing; wireshark labels it "TCP Previous segment lost"), wireshark labels all the following received packets "TCP Retransmission" until the sequence number of the previously received out-of-order packet is reached.
* In wireshark, if a packet labeled "TCP Previous segment lost" is followed almost immediately (< 0.001 s) by the missing packet, that packet is labeled "TCP Out-Of-Order" rather than "TCP Retransmission", because simply swapping the receiving times of these two packets would make the TCP sequence order correct.
Some of the different TCP analysis labels in wireshark:
Dup ACK (due to missing a packet of the next expected sequence number; this is labeled "TCP Previous segment lost" in wireshark)
Retransmission (due to the RTT-based timeout expiring with no ACK received)
Fast Retransmission (due to receiving 3 ACK packets with the same acknowledgement number)
TCP Previous segment lost (due to missing the packet that should come next and receiving a later, out-of-order packet)
Refer to the Fast Retransmit description below:
http://en.wikipedia.org/wiki/Fast_retransmit
Fast Retransmit is an enhancement to TCP which reduces the time a sender waits before retransmitting a lost segment.
A TCP sender uses timers to recognize lost segments. If an acknowledgement is not received for a particular segment within a specified time (a function of the estimated Round-trip delay time), the sender will assume the segment was lost in the network, and will retransmit the segment.
The fast retransmit enhancement works as follows: if a TCP sender receives three duplicate acknowledgements with the same acknowledge number (that is, a total of four acknowledgements with the same acknowledgement number), the sender can be reasonably confident that the segment with the next higher sequence number was dropped, and will not arrive out of order. The sender will then retransmit the packet that was presumed dropped before waiting for its timeout.
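A short illustrative timeline (sequence numbers and sizes invented for the example): the sender transmits 1000-byte segments starting at sequence numbers 1000, 2000, 3000, 4000 and 5000, and the segment at 2000 is lost. The receiver ACKs 2000 after the first segment, then sends three duplicate ACKs of 2000 as the segments at 3000, 4000 and 5000 arrive. On the third duplicate ACK the sender retransmits the segment starting at 2000 without waiting for its retransmission timer to expire.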
# tcpdump -nvi eth0 port ssh and greater 1500
// tcpdump filter: TCP source port is not 3389 and destination port is not 3389 either
tcpdump -n -N -i eth1 tcp[0:2] != 3389 and tcp[2:2] != 3389
tcpdump -n -N -i eth1 tcp[0:2] = 445 or tcp[2:2] = 445
tcpdump -n -N -i eth1 'tcp[0:2]!=161 and tcp[2:2]!=161 and tcp[0:2]!=445 and tcp[2:2]!=445 and tcp[0:2]!=139 and tcp[2:2]!=139'
// The two examples below ask tcpdump to capture connections whose src/dest ports are above 1024 and not equal to port 3389
tcpdump -n -N -i eth1 'tcp[0:2] & 0xfc00!=0 and tcp[2:2] & 0xfc00!=0 and tcp[0:2]!=3389 and tcp[2:2]!=3389'
tcpdump -n -N -i eth1 'tcp[0:2]>1024 and tcp[2:2]>1024 and tcp[0:2]!=3389 and tcp[2:2]!=3389'
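For reference, tcp[0:2] reads the two bytes at offset 0 of the TCP header, i.e. the source port, and tcp[2:2] reads the destination port, which is why these expressions can describe conditions the plain port keyword cannot. For example, the following should be roughly equivalent to 'tcp port 80':
tcpdump -n -N -i eth1 'tcp[0:2] == 80 or tcp[2:2] == 80'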
// Write the tcpdump output to a file for tcpreplay or ethereal to use later
tcpdump -n -i br0 ip host 170.116.11.94 and port 80 -w file
// Watch a specific subnet
tcpdump -n -i br0 ip net 192.168.200.0/24
TCP Flags
URG ACK PSH RST SYN FIN
## tcp[13]==17 --> FIN, ACK
## tcp[13]==2 --> SYN
## tcp[13]==18 --> SYN, ACK
## tcp[13]==4 --> RST
# tcpdump -n -i br0 'host 192.168.200.1 and (tcp[13]==17 or tcp[13]==2)'
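The values above come from the flag bits in byte 13 of the TCP header: FIN=1, SYN=2, RST=4, PSH=8, ACK=16, URG=32, so 17 = FIN+ACK and 18 = SYN+ACK. To match one flag regardless of the other bits, mask the byte instead of comparing the whole value, e.g.:
# tcpdump -n -i br0 'tcp[13] & 2 != 0'
// matches every packet with the SYN bit set, whether or not other flags are also set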
When the -s snaplen given to tcpdump at capture time was too small, tcpreplay handles the truncated packets during replay as follows:
tcpdump -s snaplen
tcpreplay -u (untruncate), whose value can be:
pad -- pad the end of the packet with zeros
trunc -- truncate it by re-adjusting the length in the IP header
When a packet is truncated in the capture file because the snaplen was too small, this option will pad the end of the packet with zeros, or truncate (trunc) it by re-adjusting the length in the IP header. The trunc option will only alter IPv4 packets; all others will be sent unmodified.
tcpreplay -R
tcpreplay -r
# tcpreplay -r 30
tcpreplay -m 0.1
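A minimal end-to-end sketch of how capture and replay fit together, assuming the classic tcpreplay option syntax (option names differ between tcpreplay versions, so treat this only as an illustration):
# tcpdump -i eth0 -s 0 -w capture.pcap
// -s 0 captures full packets, so -u is not needed when replaying
# tcpreplay -i eth1 capture.pcap
// replay the capture out of eth1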
Some early experience
// Supposed to be run first; in my tests it seems to work even without running this command, so I'm not sure it is required
// Put the eth0 interface into promiscuous mode, i.e. every packet on the network that passes by the local host will be parsed
#ifconfig eth0 promisc
// Disable promiscuous mode on the eth0 interface
#ifconfig eth0 -promisc
// Capture packets without using promiscuous mode; still under testing, not entirely sure
#tcpdump -p
// Dump packets whose source ip address is 143.158.11.94
#tcpdump src host 143.158.11.94
//dump source ip address & destination ipaddress為143.158.11.96的packet
#tcpdump host 143.158.11.96
// Dump packets whose source or destination port is 80
#tcpdump port 80
// Dump packets whose destination network is 143.158.11.0/24
# tcpdump dst net 143.158.11.0/24
# tcpdump -nvi eth2 ip net 172.16.0.0/16
>> dump a network with a specified netmask
#./tcpdump -i wan1 -s 0 net 198.145.245.0 mask 255.255.255.0 -w aaa.pcap
>> or in this format
#./tcpdump -i wan1 -s 0 net 198.145.245.0/24 -w aaa.pcap
Promiscuous mode is a mode in which the ethernet card receives every packet; under normal conditions the ethernet card uses mode 3.
The receive modes of an ethernet card are:
01h turn off receiver
02h receive only packets sent to this interface
03h mode 2 plus broadcast packets
04h mode 3 plus limited multicast packets
05h mode 3 plus all multicast packets
06h all packets (promiscuous mode)
07h raw mode for serial line only (v1.10+)
You probably ran a network-sniffing program such as tcpdump; that is why the error message above appeared.
A network card can be put into a mode called promiscuous mode. A card working in this mode can receive all the data that passes through it, regardless of whether the destination address is actually its own. This is the basic principle behind sniffing: let the card receive everything it can.
Q. How to set network card in the promiscuous mode?
A.
# ifconfig eth0 promisc
// Be sure to replace "eth0" with your own network interface in case it's "wlan0" or something else.
// To remove promiscuous mode, type:
# ifconfig eth0 -promisc
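On newer Linux systems the ip tool from iproute2 can do the same thing (the flag display may vary slightly between versions):
# ip link set eth0 promisc on
# ip link show eth0
// the PROMISC flag should now appear in the interface flags
# ip link set eth0 promisc off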
Labels:
Linux Command
Tuesday, October 7, 2008
Linux Command - vi, use vi to delete the ^M character
My experience
In Unix/BSD, to display characters such as ^M:
pressing Ctrl + V followed by Ctrl + M produces the "^M" character.
A real example:
After transferring a file from Windows to Unix/BSD,
or between Unix machines, in binary mode instead of ascii mode,
a ^M character appears at the end of every line of the text file. It looks ugly, so we want to delete it.
Use vi's substitute function
to replace ^M with nothing:
:%s/(Ctrl + V + M)//g
:1,$s/(Ctrl + V + M)//g
When I first received these scripts (by e-mail), I noticed that when I viewed them with vi on the DOM, every line ended with a ^M character, while viewing them on the Linux source machine showed no such character. So every time I wanted to run a script I had to open vi and delete the character by hand, and because the data is stored on a ramdisk it reverts to its original state after every reboot. This problem bothered me for quite a while.
From the start I suspected that the extra character came from the UNIX/DOS text-file conversion, so I first tried to search and replace the character inside vi, or to use vi's special features, but many attempts had no effect.
Then I found the tool "dos2unix"; after an experimental run I was surprised to see that the original ^M characters were gone. The method is listed below.
// Command format: dos2unix -n infile outfile
// If the destination file is a newly created file, add the "-n" parameter
# dos2unix -n wanfo.html wanfo
// If the destination file already exists, use the "-o" parameter or just leave the parameter out
# dos2unix -o wanfo.html
# dos2unix wanfo.html
wanfo.html originally contained ^M characters; after running the commands above, the generated wanfo file no longer contains any ^M.
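As an alternative to dos2unix, the same carriage-return stripping can be done with sed or tr (a small sketch, assuming GNU sed; the file names are the same example files as above):
# sed -i 's/\r$//' wanfo.html
// edit the file in place, deleting the trailing carriage return on every line
# tr -d '\r' < wanfo.html > wanfo
// copy the file while dropping every carriage return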
Some notes from another guide:
http://newbiedoc.sourceforge.net/text_editing/vi.html#SEARCHING
Searching and Replacing
:line1,line2s/old_string/new_string/g
\r\n = chr(13)chr(10) = MS-DOS
Our MSDOS text file should look like this:
Friday the 13th^M
And our Mac text file should look like this:
"Friday the 13th^M^M^MDearSir,^M^M...."
MS-DOS/Windows -> UNIX conversion:
In order to remove these ugly ^M, you search for them and replace them by....nothing!
So first, let's search for those weird ^M ... but, how can you search for character 'ENTER'?
:1,$s/^V^M//
(where ^V is Control-V, and ^M is ENTER or Control-M)
note that VI doesn't display the ^V, so you'll only see
:1,$s/^M//
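In vim (as opposed to plain vi) there is also a shortcut: :set fileformat=unix followed by :w rewrites the file with UNIX line endings, which removes the ^M characters in one step.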
Labels:
Linux Command
Monday, October 6, 2008
file manipulation experience
/* Change the permissions of all sub-files and sub-folders:
remove the read/write/execute rights of group and others for every file and subfolder of the specified folder,
using the chmod command.
*/
# chmod -R go-r,go-w,go-x ./folder/
# chmod -R g-rwx,o-rwx ./folder2/
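Both commands end up removing the same permissions; a more compact equivalent and a quick way to verify the result:
# chmod -R go-rwx ./folder/
# ls -lR ./folder/
// the group and other permission columns should now all show ---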
Thursday, October 2, 2008
Socket experience
Server
socket
bind
listen
accept
close
Client
socket
connect
close
When the client issues connect, the packet sequence on the wire looks like this:
1. SYN (client -> server)
2. SYN+ACK (server->client)
3. ACK
But I found that the server automatically sends a FIN while the client is trying to connect to it.
So the connection is closed immediately while the client is still connecting to the server.
4. FIN+ACK
5. ACK
Afterwards the client also sends a FIN to close the connection.
6. FIN+ACK
7. ACK
The server then reports the error:
Socket Error: Transport endpoint is not connected
When a program is terminated using exit, it sends a reset on the connected socket.
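A minimal C sketch of the situation described above, assuming the server simply calls close() right after accept(), which would explain the immediate FIN the client sees (the port number is made up for the example; error handling is omitted):
/* A hedged sketch, not the original program: accept one connection and
 * close it immediately, producing the SYN / SYN+ACK / ACK handshake
 * followed right away by the FIN+ACK / ACK teardown described above. */
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);         /* socket */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5555);                /* example port */

    bind(srv, (struct sockaddr *)&addr, sizeof(addr)); /* bind   */
    listen(srv, 5);                                    /* listen */

    int cli = accept(srv, NULL, NULL);                 /* accept */
    close(cli);   /* closing right away is what sends the FIN to the client */
    close(srv);
    return 0;
}
Depending on timing, later reads or writes on that connection then fail with errors such as "Transport endpoint is not connected" (ENOTCONN) or a connection reset.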
Wednesday, October 1, 2008
Endians
All processors must be designated as either big endian or little endian.
Intel's 80x86 processors and their clones are little endian.
Sun's SPARC, Motorola's 68K, and the PowerPC families are all big endian. The Java Virtual Machine is big endian as well.
Intel 80x86 is little endian (host byte order): the least significant byte is stored first in memory, so a memory dump runs from the least significant byte to the most significant byte.
The network byte order is big endian, so we need to convert the byte order when sending data from an Intel 80x86 machine onto the network.
For example:
Generally speaking, the address/port fields of a packet take the following form
as they appear in the physical packet:
Address: 8c61 12ab ==> 140.97.18.171
Port: c268 ==> 0xc268 == 49768 (big endian format, network byte order, from the most significant byte to the least significant byte)
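A small C sketch of the conversion described above (the address and port are just the example values from the packet dump; treat it as an illustration):
#include <stdio.h>
#include <arpa/inet.h>   /* inet_addr, htons */

int main(void)
{
    in_addr_t addr = inet_addr("140.97.18.171");   /* already in network byte order */
    unsigned short port = htons(49768);            /* host order -> network order   */

    /* These bytes are in network (big-endian) order, so the output matches
     * what appears in the captured packet: 8c 61 12 ab and c2 68. */
    unsigned char *a = (unsigned char *)&addr;
    unsigned char *p = (unsigned char *)&port;
    printf("address bytes: %02x %02x %02x %02x\n", a[0], a[1], a[2], a[3]);
    printf("port bytes   : %02x %02x\n", p[0], p[1]);
    return 0;
}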