Reposted from: http://www.xprogrammer.com/159.html (程序猿)

Overview: NodeJS states that its goal is "to provide an easy way to build scalable network programs." So what problem did it appear to solve, what are its strengths and weaknesses, and which scenarios is it suited to?

This article explores these questions based on my own experience.

一. Characteristics of NodeJS

Let's first look at the introduction on the NodeJS official site:

Node.js is a platform built on Chrome’s JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

Its key characteristics are:
1. It is a JavaScript runtime environment

2. It relies on Chrome's V8 engine to execute code

3. Event-driven

4. Non-blocking I/O

5. Lightweight and scalable, well suited to applications with real-time data interaction

6. Single process, single thread

二. The solutions NodeJS offers for system bottlenecks

Its arrival really does provide new ideas and approaches for dealing with real-world system bottlenecks; below we look at what problems it can solve.

On January 11 Apple pushed out a carrier settings update; Hong Kong-version 5S units now fully support China Mobile 3G and 4G without any hack.

------------------------------------------

http://bbs.weiphone.com/read-htm-tid-7463171.html

 

12/30 update: For those messaging me: please make sure your mainland (国行) unit has already switched to a 4G SIM. Mainland-version users only get free toggling of 4G on/off plus hotspot sharing, so please stop asking whether 3G will work. Once you finish the two steps I described you can freely turn LTE on and off. If you cannot get online after the second step, follow the steps exactly; yesterday's update already said not to overwrite directly, delete the old files first and then paste in the new ones. Judging from the replies under the post, most people succeed with this tutorial; if it fails, redo the steps carefully without mistakes. There are only two steps and they are simple.

12/29 11:11 update: If you see the LTE icon but have no data, you did not follow the steps (delete the sub-files instead of overwriting them for convenience!). If you want 4G, make sure you actually have a 4G SIM. As for the app used to fix missing caller names, it can be uninstalled after adding +86 once; the change is permanent and the app does not need to stay installed.

12/28 update: FaceTime and personal hotspot also work under 3G, so don't worry about that. For 4G, make sure there is 4G coverage in your area and you have already switched to a 4G SIM! My phone is the Hong Kong version; in theory the mainland version can unlock 4G and freely toggle the 4G switch (which also solves the problem of mainland users not being able to fall back to EDGE after unlocking 4G). Calls have been tested and work perfectly, especially on 3G; with LTE on, a call drops the connection to EDGE, but after hanging up it jumps back to 3G and then to 4G. This is by far the most complete carrier-file replacement available, especially for 4G users: both 4G and 3G data work, and with the 4G switch off it defaults to 3G (falling back to EDGE automatically when the 3G signal is poor). Anyone used to 3G/4G speeds will find EDGE unbearably slow.

Personal hotspot works.
FaceTime and iMessage work.
Limitation: names are not shown in the call log. The fix is to install an app called “号码助手” and use it to add the country code; if it asks for access to Contacts, enable it under Settings > Privacy > Contacts. For a fix to the short-number display issue, see http://bbs.weiphone.com/forum.ph ... page=1#pid108812633 — thanks to the author!

Original post: http://joyleley.spaces.live.com/blog/cns!E162F872A7449BAC!158.entry

(一) Parameter tuning. (After "telnet localhost 5000", enter "param.show" to list all runtime parameters; enter "param.set thread_pools 8" to change one.)
    thread_pools                  8 [pools]
    thread_pool_max            2000 [threads]
    thread_pool_min             100 [threads]
    thread_pool_timeout          10 [seconds]
    # These four parameters should be considered together.
    # thread_pools is the number of pools that incoming sessions are dispatched to. Ideally there is one pool per CPU; too many pools wastes CPU time and memory, but a few extra pools does improve the ability to handle concurrency.
    # thread_pool_min is the minimum number of threads per pool. When a pool detects a session it can handle, it hands it to one of its idle threads.
    # thread_pool_max is the upper limit on the total number of threads across all pools. Do not set it too high; a common guideline is about 90% of the expected peak, since going beyond that can lead to thread pile-ups.
    # thread_pool_timeout is the thread expiry time: while a pool has more than thread_pool_min threads, any thread idle for longer than this is released.
    listen_depth               1024 [connections]     # size of the TCP accept queue. The default is 512; raising it somewhat improves the ability to absorb bursts of connections.
    lru_interval               3600 [seconds]
    # Roughly, the LRU "grace" interval: if an object in memory has not been reused within this time, it is moved onto the LRU (Least Recently Used) list, a common caching strategy. My understanding is that raising this value reduces shuffling of objects in memory and so improves efficiency.
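If you want these values to survive a restart instead of re-entering them through the management port each time, they can also be passed to varnishd with -p at startup. A minimal sketch, assuming varnishd is launched from a startup-parameter file such as /etc/sysconfig/varnish (the DAEMON_OPTS variable name and the file path are assumptions, not something this article specifies):

# /etc/sysconfig/varnish (assumed startup-parameter file)
DAEMON_OPTS="-a :80 -T localhost:5000 \
             -f /usr/local/varnish/vcl.conf \
             -p thread_pools=8 \
             -p thread_pool_min=100 \
             -p thread_pool_max=2000 \
             -p thread_pool_timeout=10 \
             -p listen_depth=1024 \
             -p lru_interval=3600"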
(二) VCL tuning.
    vcl_recv:      set req.grace = 30s;
    vcl_fetch:     set obj.grace = 30s;
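For context, these two settings go inside the corresponding subroutines of vcl.conf; a minimal sketch of just the relevant part of the file (everything else in the VCL is omitted):

sub vcl_recv {
    set req.grace = 30s;
}

sub vcl_fetch {
    set obj.grace = 30s;
}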
(三) System environment tuning
    ulimit -HSn 131072
ulimit -HSc unlimited
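These ulimit settings only affect the current shell and the processes started from it. To make them persistent across logins and reboots, the usual approach is to also add them to /etc/security/limits.conf (a sketch; using the www account here is an assumption based on the user Varnish runs as later in these notes):

cat >>/etc/security/limits.conf<<EOF
www  soft  nofile  131072
www  hard  nofile  131072
www  soft  core    unlimited
www  hard  core    unlimited
EOF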
(四) TCP/IP network parameter tuning
    Edit "/etc/sysctl.conf". (According to the official site, this configuration can sustain a load of 4000-8000 req/s.)
    net.ipv4.ip_local_port_range = 1024 65536
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
net.ipv4.tcp_fin_timeout = 3
net.ipv4.tcp_tw_recycle = 1
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_no_metrics_save=1
net.core.somaxconn = 262144
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2

    Apply the settings:
sysctl -p
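To confirm that the new values actually took effect, you can read a few of them back (a quick sanity check, not part of the original notes):

sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog net.ipv4.tcp_fin_timeout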

Source: http://blog.chinaunix.net/uid-9395696-id-3432841.html
modprobe ip_conntrack
echo "modprobe ip_conntrack" >> /etc/rc.local
sysctl.conf changes:
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.ip_conntrack_max =655360
net.core.netdev_max_backlog = 8192
net.core.somaxconn = 4096
php.ini changes:
default_socket_timeout = 120
memory_limit = 4096M
php-fpm changes:
listen.backlog = 3072
pm.max_requests = 102400

Adjust these values according to your own server's capacity.
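None of these changes take effect until they are applied. A minimal sketch of applying them (the php-fpm init-script path is an assumption; use whatever mechanism manages php-fpm on your server):

sysctl -p                       # reload the kernel parameters
/etc/init.d/php-fpm restart     # restart php-fpm so the php.ini and php-fpm changes are picked up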

Source: http://haibo600.blog.51cto.com/blog/1951311/874603

1. Larger worker_processes values help, but only up to a point (past a certain number the gain is no longer noticeable).
2. Splitting worker_processes evenly across all CPUs with worker_cpu_affinity performs better than letting each worker float across CPUs; ignoring PHP execution time, the tests showed the best results with worker_processes set to twice the number of CPU cores (a configuration sketch follows the fastcgi config under item 5 below).
3. A unix domain socket (local IPC, bypassing the TCP stack) performs better than a TCP port:
without a backlog configured, request throughput jumps by an order of magnitude but the error rate exceeds 50%;
with a backlog configured, there is roughly a 10% performance gain.
4. Raising the backlog on nginx, php-fpm and the kernel reduces the number of "connect() to unix:/tmp/php-fpm.socket failed (11: Resource temporarily unavailable) while connecting to upstream" errors.
nginx:
in the server block of the configuration file:
listen 80 default backlog=1024;
php-fpm:
in its configuration file:
listen.backlog = 2048
kernel parameters:
in /etc/sysctl.conf; these must not be lower than the backlog values configured above:
net.ipv4.tcp_max_syn_backlog = 4096
net.core.netdev_max_backlog = 4096
5. Running multiple php-fpm master instances on a single server increases fpm's processing capacity and also reduces the chance of errors being returned.
To start multiple instances, use a separate configuration file for each:
/usr/local/php/sbin/php-fpm -y /usr/local/php/etc/php-fpm.conf &
/usr/local/php/sbin/php-fpm -y /usr/local/php/etc/php-fpm1.conf &
nginx fastcgi configuration:
    upstream phpbackend {
#      server   127.0.0.1:9000 weight=100 max_fails=10 fail_timeout=30;
#      server   127.0.0.1:9001 weight=100 max_fails=10 fail_timeout=30;
#      server   127.0.0.1:9002 weight=100 max_fails=10 fail_timeout=30;
#      server   127.0.0.1:9003 weight=100 max_fails=10 fail_timeout=30;
      server   unix:/var/www/php-fpm.sock weight=100 max_fails=10 fail_timeout=30;
      server   unix:/var/www/php-fpm1.sock weight=100 max_fails=10 fail_timeout=30;
      server   unix:/var/www/php-fpm2.sock weight=100 max_fails=10 fail_timeout=30;
      server   unix:/var/www/php-fpm3.sock weight=100 max_fails=10 fail_timeout=30;
#      server   unix:/var/www/php-fpm4.sock weight=100 max_fails=10 fail_timeout=30;
#      server   unix:/var/www/php-fpm5.sock weight=100 max_fails=10 fail_timeout=30;
#      server   unix:/var/www/php-fpm6.sock weight=100 max_fails=10 fail_timeout=30;
#      server   unix:/var/www/php-fpm7.sock weight=100 max_fails=10 fail_timeout=30;
    }
        location ~ \.php* {
            fastcgi_pass   phpbackend;
#           fastcgi_pass   unix:/var/www/php-fpm.sock;
            fastcgi_index index.php;
       ..........
       }
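The article does not show the worker_processes / worker_cpu_affinity part of the nginx configuration used in the tests. On the 2-core test machine with 8 workers pinned evenly across both CPUs (item 2 above), the top of nginx.conf would look roughly like this; treat it as a sketch rather than the author's actual file, and the worker_connections value is my own placeholder:

worker_processes  8;
worker_cpu_affinity 01 01 01 01 10 10 10 10;

events {
    worker_connections  10240;
}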
6. Test environment and results
Memory: 2 GB
Swap: 2 GB
CPU: 2 cores, Intel(R) Xeon(R) CPU E5405 @ 2.00GHz
Load was generated remotely with ab; the test target was a PHP string-processing script.
1) With 4 php-fpm instances, 8 nginx worker_processes (4 per CPU), an nginx backlog of 1024, a php backlog of 2048, a kernel backlog of 4096, and unix domain socket connections, all other parameters unchanged:
performance and error rate were reasonably balanced and acceptable; above 4 fpm instances performance started to drop while the error rate showed no clear improvement.
Conclusion: performance is best when the number of fpm instances and worker_processes are kept at a fixed multiple of the number of CPU cores.
The parameters that influence performance and errors are:
the number of php-fpm instances, the number of nginx worker_processes, fpm's max_requests, php's backlog, and the unix domain socket.
100,000 requests: no errors at 500 concurrency; at 1000 concurrency the error rate was 0.9%.
500 concurrency:
Time taken for tests:   25 seconds avg.
Complete requests:      100000
Failed requests:        0
Write errors:           0
Requests per second:    4000 [#/sec] (mean) avg.
Time per request:       122.313 [ms] (mean)
Time per request:       0.245 [ms] (mean, across all concurrent requests)
Transfer rate:          800 [Kbytes/sec] received avg.
1000 concurrency:
Time taken for tests:   25 seconds avg.
Complete requests:      100000
Failed requests:        524
   (Connect: 0, Length: 524, Exceptions: 0)
Write errors:           0
Non-2xx responses:      524
Requests per second:    3903.25 [#/sec] (mean)
Time per request:       256.197 [ms] (mean)
Time per request:       0.256 [ms] (mean, across all concurrent requests)
Transfer rate:          772.37 [Kbytes/sec] received
2) With all other parameters unchanged and the unix domain socket replaced by a TCP port, the results were as follows.
500 concurrency:
Concurrency Level:      500
Time taken for tests:   26.934431 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Requests per second:    3712.72 [#/sec] (mean)
Time per request:       134.672 [ms] (mean)
Time per request:       0.269 [ms] (mean, across all concurrent requests)
Transfer rate:          732.37 [Kbytes/sec] received
1000 concurrency:
Concurrency Level:      1000
Time taken for tests:   28.385349 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Requests per second:    3522.94 [#/sec] (mean)
Time per request:       283.853 [ms] (mean)
Time per request:       0.284 [ms] (mean, across all concurrent requests)
Transfer rate:          694.94 [Kbytes/sec] received
Compared with 1), this is roughly a 10% performance drop.
7. (5.16) After changing fpm's max_requests to 1000, the number of error responses at 1000 concurrency dropped to under 200,
with the transfer rate staying around 800 KB/s.

 

The Layer 2 Tunneling Protocol (L2TP) is a VPN tunneling protocol and the successor to PPTP. L2TP supports multiple tunnels between two endpoints, but it normally relies on IPSec for encryption and authentication; it allows roaming clients to connect to a fixed server.

一、Install IPSec

IPSec encrypts and authenticates IP packets; Openswan is commonly used to provide it.

1.1、Install build tools

apt-get -y install build-essential

1.2、Install the OpenSwan dependencies

apt-get -y install libgmp3-dev flex bison

1.3、Build and install OpenSwan

wget -c http://www.openswan.org/download/openswan-2.6.33.tar.gz
tar -zxf openswan-2.6.33.tar.gz
cd openswan-2.6.33
make programs install

1.4、Edit the IPSec configuration file

Replace "192.168.1.102" with your server's public IP.

cp /etc/ipsec.conf /etc/ipsec.conf.old
cat >/etc/ipsec.conf<<EOF
version 2.0

config setup
    nat_traversal=yes
    virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
    oe=off
    protostack=netkey

conn L2TP-PSK-NAT
    rightsubnet=vhost:%priv
    also=L2TP-PSK-noNAT

conn L2TP-PSK-noNAT
    authby=secret
    pfs=no
    auto=add
    keyingtries=3
    rekey=no
    ikelifetime=8h
    keylife=1h
    type=transport
    left=192.168.1.102
    leftprotoport=17/1701
    right=%any
    rightprotoport=17/%any
EOF

1.5、Set the PSK (pre-shared key)

Replace "192.168.1.102" with your server's public IP, and "123456" with your own PSK.

cat >/etc/ipsec.secrets<<EOF
192.168.1.102 %any: PSK "123456"
EOF

1.6、Adjust the network policy

This is a for loop; enter it one line at a time, pressing Enter after each line.

for each in /proc/sys/net/ipv4/conf/*
do
echo 0 > $each/accept_redirects
echo 0 > $each/send_redirects
done
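The loop above only changes the running kernel; after a reboot the redirect settings return to their defaults. To make them persistent you can also add the equivalent keys to /etc/sysctl.conf (my addition, not part of the original tutorial):

cat >>/etc/sysctl.conf<<EOF
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.send_redirects = 0
EOF
sysctl -p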

1.7、Restart the IPSec service

/etc/init.d/ipsec restart
/usr/local/sbin/ipsec verify

You can try connecting once with an L2TP/IPSec client to check whether the IPSec part of the configuration works.

cat /var/log/auth.log | grep pluto

If "IPsec SA established transport mode" appears, the IPSec part succeeded.

二、Install L2TP

xl2tpd is used to provide L2TP; note that xl2tpd needs the l2tp-control binary, which is extracted from rp-l2tp.

2.1、Extract l2tp-control

wget http://nchc.dl.sourceforge.net/project/rp-l2tp/rp-l2tp/0.4/rp-l2tp-0.4.tar.gz
tar zxvf rp-l2tp-0.4.tar.gz
cd rp-l2tp-0.4
./configure
make
cp handlers/l2tp-control /usr/local/sbin/
mkdir /var/run/xl2tpd/
ln -s /usr/local/sbin/l2tp-control /var/run/xl2tpd/l2tp-control

2.2、Build and install xl2tpd

apt-get -y install libpcap-dev # install the build dependency
wget -c http://www.xelerance.com/wp-content/uploads/software/xl2tpd/xl2tpd-1.2.8.tar.gz
tar -zxf xl2tpd-1.2.8.tar.gz
cd xl2tpd-1.2.8
make install
mkdir /etc/xl2tpd

2.3、Edit the xl2tpd configuration file

ip range is the private IPv4 address range on the server side from which connecting clients are assigned an address.
local ip is the IP address used by the server's pppX interface.

cat >/etc/xl2tpd/xl2tpd.conf<<EOF
[global]
ipsec saref = yes

[lns default]
local ip = 10.10.11.1
ip range = 10.10.11.2-10.10.11.245
refuse chap = yes
refuse pap = yes
require authentication = yes
ppp debug = yes
pppoptfile = /etc/ppp/options.xl2tpd
length bit = yes
EOF

三、Install and configure PPP

3.1、Install the ppp package

apt-get -y install ppp

3.2、Configure options.xl2tpd

The main change is the ms-dns entries; leave everything else at its defaults.

cat >/etc/ppp/options.xl2tpd<<EOF
require-mschap-v2
ms-dns 8.8.8.8
ms-dns 8.8.4.4
asyncmap 0
auth
crtscts
lock
hide-password
modem
debug
name l2tpd
proxyarp
lcp-echo-interval 30
lcp-echo-failure 4
EOF

四、Add VPN users

Each line of chap-secrets has four fields: the username, the server name, the password, and the IP assigned to the client.
The server name can be l2tpd or pptpd; * matches any server.
The password is written in plain text and does not need to be encrypted.
The final * means the client's IP is assigned at random from the configured remote IP range.

cat >>/etc/ppp/chap-secrets<<EOF
user * 123456 *
EOF

五、Enable packet forwarding

Without this, a connected VPN client can only reach resources on the server itself, not anything beyond it.

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
sysctl -p

Enable NAT forwarding in iptables:

iptables -t nat -A POSTROUTING -j MASQUERADE

Set the MTU (clamp the TCP MSS):

iptables -I FORWARD -p tcp --syn -i ppp+ -j TCPMSS --set-mss 1356
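The two iptables rules above are lost on reboot. One way to make them persistent on Debian is to save them to a file and restore that file when the interface comes up (a sketch; the file path and the if-up.d hook are my own choices, not from the original tutorial):

iptables-save > /etc/iptables.rules
cat >/etc/network/if-up.d/iptables<<EOF
#!/bin/sh
iptables-restore < /etc/iptables.rules
EOF
chmod +x /etc/network/if-up.d/iptables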

六、Start the xl2tpd service

/usr/local/sbin/xl2tpd
/usr/local/sbin/xl2tpd -D # start in debug mode
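To have the VPN come back automatically after a reboot, one simple option is to start IPSec and xl2tpd from /etc/rc.local (an assumption of mine, not something the original tutorial covers; add the lines before a final "exit 0" if your rc.local has one):

/etc/init.d/ipsec start
/usr/local/sbin/xl2tpd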

References:

1. http://www.linuxhomenetworking.com/wiki/index.php
2. http://www.linode.com/wiki/index.php/AndroidL2TPPSKServer
3. https://humou.net/blog/201102061326.html
4. http://b.gkp.cc/2010/06/19/setup-ipsec-l2tp-on-centos-55/
5. http://apple4.us/2010/05/setting-up-l2tp-vpn-on-debian-ubuntu.html

Original article: http://wangyan.org/blog/debian-l2tp-ipsec-vpn.html
License: this site's content is published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 license.
Copyright notice: when reposting original articles, please link back to the original source.

Learning Apache mod_rewrite (5)

Today I mainly list some examples. Some of them target special paths or special situations; they are given as reference for your own thinking, because they are, after all, just examples.

For each goal below, the rewrite configuration is listed with a short note where needed.

Canonical URLs
RewriteRule ^/~([^/]+)/?(.*) /u/$1/$2 [R]    rewrite /~user into the /u/user form
RewriteRule ^/([uge])/([^/]+)$ /$1/$2/ [R]    add the trailing / missing from /u/user

Canonical hostname
RewriteCond %{HTTP_HOST} !^fully\.qualified\.domain\.name [NC]    hostname is not the canonical one
RewriteCond %{HTTP_HOST} !^$    and is not empty
RewriteCond %{SERVER_PORT} !^80$    and the port is not 80
RewriteRule ^/(.*) http://fully.qualified.domain.name:%{SERVER_PORT}/$1 [L,R]    rewrite
RewriteCond %{HTTP_HOST} !^fully\.qualified\.domain\.name [NC]
RewriteCond %{HTTP_HOST} !^$
RewriteRule ^/(.*) http://fully.qualified.domain.name/$1 [L,R]
Moving the URL root
RewriteEngine on
RewriteRule ^/$ /e/www/ [R]    move / to /e/www/

Adding a trailing slash to directories (per-directory context)
RewriteEngine on
RewriteBase /~quux/
RewriteRule ^foo$ foo/ [R]    /~quux/foo is a directory, add the /

RewriteEngine on
RewriteBase /~quux/
RewriteCond %{REQUEST_FILENAME} -d    if the requested filename is a directory
RewriteRule ^(.+[^/])$ $1/ [R]    add the slash when the URL does not end with one
Web cluster
RewriteEngine on
RewriteMap user-to-host txt:/path/to/map.user-to-host    user-to-server map
RewriteMap group-to-host txt:/path/to/map.group-to-host    group-to-server map
RewriteMap entity-to-host txt:/path/to/map.entity-to-host    entity-to-server map
RewriteRule ^/u/([^/]+)/?(.*) http://${user-to-host:$1|server0}/u/$1/$2    balance by user
RewriteRule ^/g/([^/]+)/?(.*) http://${group-to-host:$1|server0}/g/$1/$2    balance by group
RewriteRule ^/e/([^/]+)/?(.*) http://${entity-to-host:$1|server0}/e/$1/$2    balance by entity
RewriteRule ^/([uge])/([^/]+)/?$ /$1/$2/.www/
RewriteRule ^/([uge])/([^/]+)/([^.]+.+) /$1/$2/.www/$3\

Migrating the URL root
RewriteEngine on
RewriteRule ^/~(.+) http://newserver/~$1 [R,L]    to another server
Structured home directories by first letter of username
RewriteEngine on
RewriteRule ^/~(([a-z])[a-z0-9]+)(.*) /home/$2/$1/.www$3    the inner parentheses capture $2

Migrating NCSA imagemap to mod_imap
RewriteEngine on
RewriteRule ^/cgi-bin/imagemap(.*) $1 [PT]

Searching for a resource in multiple directories
RewriteEngine on
# first try to find it in custom/...
RewriteCond /your/docroot/dir1/%{REQUEST_FILENAME} -f
RewriteRule ^(.+) /your/docroot/dir1/$1 [L]
# second try to find it in pub/...
RewriteCond /your/docroot/dir2/%{REQUEST_FILENAME} -f
RewriteRule ^(.+) /your/docroot/dir2/$1 [L]
# else go on for other Alias or ScriptAlias directives,
RewriteRule ^(.+) - [PT]

Setting environment variables from the URL
RewriteEngine on
RewriteRule ^(.*)/S=([^/]+)/(.*) $1/$3 [E=STATUS:$2]
Virtual hosts (based on the username in the hostname)
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.[^.]+\.host\.com$
RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
RewriteRule ^www\.([^.]+)\.host\.com(.*) /home/$1$2

Treating outside hosts differently (based on the remote host)
RewriteEngine on
RewriteCond %{REMOTE_HOST} !^.+\.ourdomain\.com$
RewriteRule ^(/~.+) http://www.somewhere.com/$1 [R,L]

Redirecting on missing files
RewriteEngine on
RewriteCond /your/docroot/%{REQUEST_FILENAME} !-f    not a regular file
RewriteRule ^(.+) http://webserverB.dom/$1

Handling a special protocol (xredirect) with a program
RewriteRule ^xredirect:(.+) /path/to/nph-xredirect.cgi/$1 \
            [T=application/x-httpd-cgi,L]
Downloading from the nearest mirror
RewriteEngine on
RewriteMap multiplex txt:/path/to/map.cxan    map of top-level domains to the nearest FTP server
RewriteRule ^/CxAN/(.*) %{REMOTE_HOST}::$1 [C]
RewriteRule ^.+\.([a-zA-Z]+)::(.*)$ ${multiplex:$1|ftp.default.dom}$2 [R,L]    serve a different FTP server depending on the top-level domain

Time-dependent rewriting
RewriteEngine on
RewriteCond %{TIME_HOUR}%{TIME_MIN} >0700
RewriteCond %{TIME_HOUR}%{TIME_MIN} <1900
RewriteRule ^foo\.html$ foo.day.html    daytime is between 07:00 and 19:00
RewriteRule ^foo\.html$ foo.night.html    the rest is night

Backward compatibility after changing file extensions
RewriteEngine on
RewriteBase /~quux/
# parse out basename, but remember the fact
RewriteRule ^(.*)\.html$ $1 [C,E=WasHTML:yes]
# rewrite to document.phtml if exists
RewriteCond %{REQUEST_FILENAME}.phtml -f    rewrite if $1.phtml exists
RewriteRule ^(.*)$ $1.phtml [S=1]
# else reverse the previous basename cutout
RewriteCond %{ENV:WasHTML} ^yes$    if $1.phtml does not exist, keep the name unchanged
RewriteRule ^(.*)$ $1.html
Renaming a file (per-directory) as an internal rewrite
RewriteEngine on
RewriteBase /~quux/
RewriteRule ^foo\.html$ bar.html

Renaming a file as a redirect, resubmitted by the client
RewriteEngine on
RewriteBase /~quux/
RewriteRule ^foo\.html$ bar.html [R]

Rewriting based on browser type
RewriteCond %{HTTP_USER_AGENT} ^Mozilla/3.*
RewriteRule ^foo\.html$ foo.NS.html [L]
RewriteCond %{HTTP_USER_AGENT} ^Lynx/.* [OR]
RewriteCond %{HTTP_USER_AGENT} ^Mozilla/[12].*
RewriteRule ^foo\.html$ foo.20.html [L]
RewriteRule ^foo\.html$ foo.32.html [L]

Dynamically mirroring remote resources (uses the proxy module)
RewriteEngine on
RewriteBase /~quux/
RewriteRule ^hotsheet/(.*)$ http://www.tstimpreso.com/hotsheet/$1 [P]

RewriteEngine on
RewriteBase /~quux/
RewriteRule ^usa-news\.html$ http://www.quux-corp.com/news/index.html [P]

Reverse dynamic mirror
RewriteEngine on
RewriteCond /mirror/of/remotesite/$1 -U
RewriteRule ^http://www\.remotesite\.com/(.*)$ /mirror/of/remotesite/$1
Load balancing (round-robin effect via the proxy module)
RewriteEngine on
RewriteMap lb prg:/path/to/lb.pl
RewriteRule ^/(.+)$ ${lb:$1} [P,L]

lb.pl:
#!/path/to/perl
$| = 1;
$name = "www"; # the hostname base
$first = 1; # the first server (not 0 here, because 0 is myself)
$last = 5; # the last server in the round-robin
$domain = "foo.dom"; # the domainname
$cnt = 0;
while (<STDIN>) {
$cnt = (($cnt+1) % ($last+1-$first));
$server = sprintf("%s%d.%s", $name, $cnt+$first, $domain);
print "http://$server/$_";
}
##EOF##
Turning a static page into a script
RewriteEngine on
RewriteBase /~quux/
RewriteRule ^foo\.html$ foo.cgi [T=application/x-httpd-cgi]

Blocking robots
RewriteCond %{HTTP_USER_AGENT} ^NameOfBadRobot.*
RewriteCond %{REMOTE_ADDR} ^123\.45\.67\.[8-9]$
RewriteRule ^/~quux/foo/arc/.+ - [F]

Preventing hotlinking of your images
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://www.quux-corp.de/~quux/.*$ [NC]    requests from your own pages must not be blocked
RewriteRule .*\.gif$ - [F]
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !.*/foo-with-gif\.html$
RewriteRule ^inlined-in-foo\.gif$ - [F]

Denying access to certain hosts
RewriteEngine on
RewriteMap hosts-deny txt:/path/to/hosts.deny
RewriteCond ${hosts-deny:%{REMOTE_HOST}|NOT-FOUND} !=NOT-FOUND [OR]
RewriteCond ${hosts-deny:%{REMOTE_ADDR}|NOT-FOUND} !=NOT-FOUND
RewriteRule ^/.* - [F]
User authorization
RewriteCond %{REMOTE_IDENT}@%{REMOTE_HOST} !^friend1@client1.quux-corp\.com$
RewriteCond %{REMOTE_IDENT}@%{REMOTE_HOST} !^friend2@client2.quux-corp\.com$
RewriteCond %{REMOTE_IDENT}@%{REMOTE_HOST} !^friend3@client3.quux-corp\.com$
RewriteRule ^/~quux/only-for-friends/ - [F]

Template for an external rewrite program
RewriteEngine on
RewriteMap quux-map prg:/path/to/map.quux.pl
RewriteRule ^/~quux/(.*)$ /~quux/${quux-map:$1}

map.quux.pl:
#!/path/to/perl
$| = 1;
while (<>) {
s|^foo/|bar/|;
print $_;
}
Search-engine friendly URLs
RewriteRule ^/products$ /content.php
RewriteRule ^/products/([0-9]+)$ /content.php?id=$1
RewriteRule ^/products/([0-9]+),([ad]*),([0-9]{0,3}),([0-9]*),([0-9]*$) /marso/content.php?id=$1&sort=$2&order=$3&start=$4

References:
URL Rewriting Guide
http://httpd.apache.org/docs/misc/rewriteguide.html

mod_rewrite: A Beginner's Guide to URL Rewriting
http://www.sitepoint.com/article/910

When installed on a virtual machine that is running a supported Linux operating system, Linux Integration Services for Hyper-V provides the following functionality:

  • Driver support for synthetic devices: Linux Integration Services supports the synthetic network controller and the synthetic storage controller that were developed specifically for Hyper-V.
  • Fastpath Boot Support for Hyper-V: Boot devices now take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.
  • Timesync: The clock inside the virtual machine will remain synchronized with the clock on the host.
  • Integrated Shutdown: Virtual machines running Linux can be shut down from either Hyper-V Manager or System Center Virtual Machine Manager, using the "Shut Down" command.
  • Symmetric Multi-Processing (SMP) Support: Supported Linux distributions can use up to 4 virtual processors (VP) per virtual machine.

 

1. Download Linux Integration Components v2.1

a. Download Linux Integration Components v2.1 from Microsoft Download site.

b. Extract the .exe file into a temp folder. You will only need the LinuxIC v21.iso file.

c. Place the LinuxIC v21.iso file on your VM host server, where you can mount the ISO as a CD-ROM for your CentOS virtual machine.

 

2. Installation

You will need the Development Tools group installed to be able to compile the Integration Components. You can install it with yum:

yum groupinstall "Development Tools"

NOTE: This command needs a network connection to download packages, and if your Hyper-V virtual machine only has the synthetic network adapter (whose driver is not installed yet), it will have no internet access. So make sure the Development Tools group is installed during the initial CentOS installation.

Now make sure you have attached the LinuxIC v21.iso file to the virtual machine's CD-ROM drive as an image.

As the root user, mount the CD in the virtual machine by issuing the following commands at a shell prompt:

mkdir /mnt/cdrom
mount /dev/cdrom /mnt/cdrom

Copy Linux Integration Services to the virtual machine, then unmount the CD-ROM, which is no longer needed:

mkdir /opt/linux_ic_v21_rtm
cp -R /mnt/cdrom/* /opt/linux_ic_v21_rtm
umount /mnt/cdrom

As the root user, run the following command to compile and install the synthetic drivers:

cd /opt/linux_ic_v21_rtm/
make
make install

NOTE: If you are using the x64 version of CentOS, you have to perform an extra step.
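After rebooting the virtual machine, you can check whether the synthetic drivers actually loaded. The module names below (vmbus, netvsc, storvsc) are what this LIS release typically registers, so treat this as a rough check rather than an exact list:

/sbin/lsmod | egrep 'vmbus|netvsc|storvsc'   # synthetic bus, network and storage drivers
dmesg | grep -i vmbus                        # boot messages from the Hyper-V bus driver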

Source: http://hi.baidu.com/%C0%B4%C1%BD%B8%F9%CF%E3%B3%A6/blog/item/a59a610fa26207ed37d12224.html

File locations
Log file             /var/log/varnish/varnish.log
Binaries             /usr/local/varnish/bin
Cache file           /var/vcache
Configuration file   /usr/local/varnish/vcl.conf
Startup parameters   /etc/sysconfig/varnish
Init scripts         /etc/rc.d/init.d/varnish
                     /etc/rc.d/init.d/varnishlog

Create the www user and group, as well as the directory for Varnish's cache files (/var/vcache):
/usr/sbin/groupadd www -g 48
/usr/sbin/useradd -u 48 -g www www
mkdir -p /var/vcache
chmod +w /var/vcache
chown -R www:www /var/vcache

Create the log directory and give the www user access to it:
mkdir -p /var/log/varnish
chmod +w /var/log/varnish
chown -R www:www /var/log/varnish

You may need to install the following packages first:
[CENTOS]yum install ncurses-devel
[UBUNTU]apt-get install libncurses5-dev

Build and install:
./configure --prefix=/usr/local/varnish
make
make install
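With the paths listed above, a minimal startup command might look like the following; the listen address, cache size and admin port are assumptions of mine (in practice the init script /etc/rc.d/init.d/varnish would wrap something equivalent):

/usr/local/varnish/sbin/varnishd \
    -u www -g www \
    -f /usr/local/varnish/vcl.conf \
    -a 0.0.0.0:80 \
    -s file,/var/vcache/varnish_cache.data,1G \
    -T 127.0.0.1:5000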

Source: http://www.uplook.cn/kbase-Index-show-view10393.html

#!/bin/bash
# BY kerryhu
# MAIL:king_819@163.com
# BLOG:http://kerry.blog.51cto.com
# Please make sure yum works and run the yum steps manually before running this script.
#============================ Sync system time ============================
yum install -y ntp
ntpdate time.nist.gov
echo "00 01 * * * ntpdate time.nist.gov" >> /etc/crontab

#============================ Install Varnish =============================

On RedHat/CentOS, the following packages need to be installed before building varnish:
automake
autoconf
libtool
ncurses-devel
libxslt
groff
pcre-devel
pkgconfig

groupadd www
useradd www -g www -s /sbin/nologin
mkdir -p /data/varnish/{cache,logs}
chmod +w /data/varnish/{cache,logs}
chown -R www:www /data/varnish/{cache,logs}
cd /opt
yum install -y automake autoconf libtool ncurses-devel libxslt groff pcre-devel pkgconfig
wget http://sourceforge.net/projects/varnish/files/varnish/2.1.3/varnish-2.1.3.tar.gz/download
tar -zxvf varnish-2.1.3.tar.gz
cd varnish-2.1.3
./configure --prefix=/usr/local/varnish
make;make install
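Once the build finishes, a quick sanity check is to print the version of the installed binary; the sbin path follows from the --prefix used above, and pointing varnishd at the /data/varnish/cache and /data/varnish/logs directories created by this script is left to your own startup configuration:

/usr/local/varnish/sbin/varnishd -V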