I. Expressions for validating numbers
1 Digits: ^[0-9]*$
2 Exactly n digits: ^\d{n}$
3 At least n digits: ^\d{n,}$
4 m to n digits: ^\d{m,n}$
5 Zero, or a number not starting with zero: ^(0|[1-9][0-9]*)$
6 A number not starting with zero, with at most two decimal places: ^([1-9][0-9]*)+(\.[0-9]{1,2})?$
7 A positive or negative number with 1-2 decimal places: ^(\-)?\d+(\.\d{1,2})?$
8 Positive numbers, negative numbers, and decimals: ^(\-|\+)?\d+(\.\d+)?$
9 A positive real number with two decimal places: ^[0-9]+(\.[0-9]{2})?$
10 A positive real number with 1-3 decimal places: ^[0-9]+(\.[0-9]{1,3})?$
11 A non-zero positive integer: ^[1-9]\d*$ or ^([1-9][0-9]*){1,3}$ or ^\+?[1-9][0-9]*$
12 A non-zero negative integer: ^\-[1-9][0-9]*$ or ^-[1-9]\d*$
13 A non-negative integer: ^\d+$ or ^[1-9]\d*|0$
14 A non-positive integer: ^-[1-9]\d*|0$ or ^((-\d+)|(0+))$
15 A non-negative floating-point number: ^\d+(\.\d+)?$ or ^[1-9]\d*\.\d*|0\.\d*[1-9]\d*|0?\.0+|0$
16 A non-positive floating-point number: ^((-\d+(\.\d+)?)|(0+(\.0+)?))$ or ^(-([1-9]\d*\.\d*|0\.\d*[1-9]\d*))|0?\.0+|0$
17 A positive floating-point number: ^[1-9]\d*\.\d*|0\.\d*[1-9]\d*$ or ^(([0-9]+\.[0-9]*[1-9][0-9]*)|([0-9]*[1-9][0-9]*\.[0-9]+)|([0-9]*[1-9][0-9]*))$
18 A negative floating-point number: ^-([1-9]\d*\.\d*|0\.\d*[1-9]\d*)$ or ^(-(([0-9]+\.[0-9]*[1-9][0-9]*)|([0-9]*[1-9][0-9]*\.[0-9]+)|([0-9]*[1-9][0-9]*)))$
19 A floating-point number: ^(-?\d+)(\.\d+)?$ or ^-?([1-9]\d*\.\d*|0\.\d*[1-9]\d*|0?\.0+|0)$
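
For example, two of the numeric patterns above can be tried out with Python's re module (a minimal sketch; the sample values are made up for illustration):

# A minimal sketch: validating numeric strings with two of the patterns above.
import re
positive_int = re.compile(r'^\+?[1-9][0-9]*$')       # item 11: non-zero positive integer
two_decimals = re.compile(r'^[0-9]+(\.[0-9]{2})?$')   # item 9: positive real with two decimal places
for value in ['42', '+7', '0', '3.14', '10.50']:
    print(value, bool(positive_int.match(value)), bool(two_decimals.match(value)))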
II. Expressions for validating characters
1 Chinese characters: ^[\u4e00-\u9fa5]{0,}$
2 English letters and digits: ^[A-Za-z0-9]+$ or ^[A-Za-z0-9]{4,40}$
3 Any characters, length 3-20: ^.{3,20}$
4 A string made up of the 26 English letters: ^[A-Za-z]+$
5 A string made up of the 26 uppercase English letters: ^[A-Z]+$
6 A string made up of the 26 lowercase English letters: ^[a-z]+$
7 A string made up of digits and the 26 English letters: ^[A-Za-z0-9]+$
8 A string made up of digits, the 26 English letters, or underscores: ^\w+$ or ^\w{3,20}$
9 Chinese characters, English letters, digits, and underscores: ^[\u4E00-\u9FA5A-Za-z0-9_]+$
10 Chinese characters, English letters, and digits, but no underscores or other symbols: ^[\u4E00-\u9FA5A-Za-z0-9]+$ or ^[\u4E00-\u9FA5A-Za-z0-9]{2,20}$
11 Strings that must not contain characters such as % & ' , ; = ? $ ": [^%&',;=?$\x22]+
12 Strings that must not contain ~: [^~\x22]+
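
Similarly, the character classes above can be used from Python; note that the \u escapes must appear in a unicode string literal (a minimal sketch with made-up sample strings):

# A minimal sketch: matching Chinese characters and word characters.
import re
han_only   = re.compile(u'^[\u4e00-\u9fa5]+$')   # item 1: Chinese characters only
word_chars = re.compile(r'^\w{3,20}$')           # item 8: letters, digits, underscore, length 3-20
print(bool(han_only.match(u'正则表达式')))        # True
print(bool(han_only.match(u'regex2023')))         # False
print(bool(word_chars.match('user_name_01')))     # True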
III. Expressions for special needs
1 Email address: ^\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*$
2 Domain name: [a-zA-Z0-9][-a-zA-Z0-9]{0,62}(\.[a-zA-Z0-9][-a-zA-Z0-9]{0,62})+\.?
3 Internet URL: [a-zA-Z]+://[^\s]* or ^http://([\w-]+\.)+[\w-]+(/[\w-./?%&=]*)?$
4 Mobile phone number: ^(13[0-9]|14[57]|15[0-35-9]|18[0-35-9])\d{8}$
5 Telephone number ("XXX-XXXXXXX", "XXXX-XXXXXXXX", "XXX-XXXXXXX", "XXX-XXXXXXXX", "XXXXXXX", "XXXXXXXX"): ^(\(\d{3,4}\)|\d{3,4}-)?\d{7,8}$
6 Domestic (China) telephone number (0511-4405222, 021-87888822): \d{3}-\d{8}|\d{4}-\d{7}
7 ID card number (15 or 18 digits): ^(\d{15}|\d{18})$
8 Short ID card number (digits, optionally ending with the letter x): ^([0-9]){7,18}(x|X)?$ or ^\d{8,18}|[0-9x]{8,18}|[0-9X]{8,18}?$
9 Valid account name (starts with a letter, 5-16 characters, letters/digits/underscores allowed): ^[a-zA-Z][a-zA-Z0-9_]{4,15}$
10 Password (starts with a letter, length 6-18, only letters, digits and underscores): ^[a-zA-Z]\w{5,17}$
11 Strong password (must contain uppercase letters, lowercase letters and digits, no special characters, length 8-10): ^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])[a-zA-Z0-9]{8,10}$
12 Date format: ^\d{4}-\d{1,2}-\d{1,2}
13 The 12 months of a year (01-09 and 1-12): ^(0?[1-9]|1[0-2])$
14 The 31 days of a month (01-09 and 1-31): ^((0?[1-9])|((1|2)[0-9])|30|31)$
15 Money input formats:
16 1. There are four representations of money we can accept: "10000.00" and "10,000.00", plus "10000" and "10,000" without the cents: ^[1-9][0-9]*$
17 2. This matches any number that does not start with 0, but it also means the single character "0" is rejected, so we use the following instead: ^(0|[1-9][0-9]*)$
18 3. A zero, or a number that does not start with zero. We can also allow a leading minus sign: ^(0|-?[1-9][0-9]*)$
19 4. That is a zero, or a possibly negative number that does not start with zero. Let the user start with 0 after all, and drop the minus sign, since money cannot be negative. Next we add the optional fractional part: ^[0-9]+(\.[0-9]+)?$
20 5. Note that the fractional part must contain at least one digit, so "10." is rejected while "10" and "10.2" pass: ^[0-9]+(\.[0-9]{2})?$
21 6. This requires exactly two digits after the decimal point; if that feels too strict, use: ^[0-9]+(\.[0-9]{1,2})?$
22 7. This allows the user to write only one decimal digit. Next we handle the commas in the number, like this: ^[0-9]{1,3}(,[0-9]{3})*(\.[0-9]{1,2})?$
23 8. One to three digits, followed by any number of groups of a comma plus three digits; the comma becomes optional rather than required: ^([0-9]+|[0-9]{1,3}(,[0-9]{3})*)(\.[0-9]{1,2})?$
24 Note: this is the final result. Remember that "+" can be replaced with "*" if you consider an empty string acceptable (odd, but why not). Finally, remember to remove the extra backslash escaping when passing the pattern to a function; that is where most mistakes happen.
25 XML file name: ^([a-zA-Z]+-?)+[a-zA-Z0-9]+\.[xX][mM][lL]$
26 Chinese characters: [\u4e00-\u9fa5]
27 Double-byte characters: [^\x00-\xff] (includes Chinese characters; can be used to compute string length, counting a double-byte character as 2 and an ASCII character as 1)
28 Blank line: \n\s*\r (can be used to delete blank lines)
29 HTML tag: <(\S*?)[^>]*>.*?</\1>|<.*? /> (the versions circulating online are mostly poor; even this one only handles simple cases and still fails on complex nested tags)
30 Leading and trailing whitespace: ^\s*|\s*$ or (^\s*)|(\s*$) (can be used to strip whitespace, including spaces, tabs and form feeds, from the beginning and end of a line; very useful)
31 Tencent QQ number: [1-9][0-9]{4,} (QQ numbers start from 10000)
32 Chinese postal code: [1-9]\d{5}(?!\d) (Chinese postal codes are 6 digits)
33 IP address: \d+\.\d+\.\d+\.\d+ (useful for extracting IP addresses)
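
A few of the special-purpose patterns above, exercised from Python (a minimal sketch; the sample inputs are made up for illustration):

# A minimal sketch: validation and extraction with some of the patterns above.
import re
email_re = re.compile(r'^\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*$')    # item 1: email address
money_re = re.compile(r'^([0-9]+|[0-9]{1,3}(,[0-9]{3})*)(\.[0-9]{1,2})?$')  # item 23: final money format
ip_re    = re.compile(r'\d+\.\d+\.\d+\.\d+')                                # item 33: IP extraction
print(bool(email_re.match('user.name+tag@example.com')))    # True
print(bool(money_re.match('10,000.00')))                     # True
print(ip_re.findall('srv1=10.0.4.1 srv2=10.0.4.2'))           # ['10.0.4.1', '10.0.4.2']
print(re.sub(r'^\s*|\s*$', '', '  hello  '))                  # item 30: strips to 'hello'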

Compared with the receiver-based approach, the direct stream approach has the following advantages.
1. Simplified parallelism: the RDD is created automatically with as many partitions as there are Kafka partitions.
2. Efficiency: reading directly from Kafka is efficient because no write-ahead log is needed.
3. Exactly-once semantics: repeated consumption of the same records is avoided.

● Simplified Parallelism: No need to create multiple input Kafka streams and union them. With directStream, Spark Streaming will create as many RDD partitions as there are Kafka partitions to consume, which will all read data from Kafka in parallel. So there is a one-to-one mapping between Kafka and RDD partitions, which is easier to understand and tune.
● Efficiency: Achieving zero-data loss in the first approach required the data to be stored in a Write Ahead Log, which further replicated the data. This is actually inefficient as the data effectively gets replicated twice – once by Kafka, and a second time by the Write Ahead Log. This second approach eliminates the problem as there is no receiver, and hence no need for Write Ahead Logs. As long as you have sufficient Kafka retention, messages can be recovered from Kafka.
● Exactly-once semantics: The first approach uses Kafka's high level API to store consumed offsets in Zookeeper. This is traditionally the way to consume data from Kafka. While this approach (in combination with write ahead logs) can ensure zero data loss (i.e. at-least once semantics), there is a small chance some records may get consumed twice under some failures. This occurs because of inconsistencies between data reliably received by Spark Streaming and offsets tracked by Zookeeper. Hence, in this second approach, we use simple Kafka API that does not use Zookeeper. Offsets are tracked by Spark Streaming within its checkpoints. This eliminates inconsistencies between Spark Streaming and Zookeeper/Kafka, and so each record is received by Spark Streaming effectively exactly once despite failures. In order to achieve exactly-once semantics for output of your results, your output operation that saves the data to an external data store must be either idempotent, or an atomic transaction that saves results and offsets (see Semantics of output operations in the main programming guide for further information).

Note that one drawback of this approach is that it does not update offsets in Zookeeper; you have to handle them yourself.

In the example below, the offsets are stored in a MySQL table, which makes them easy to inspect.

/home/work/spark-1.6.0-cdh5.8.0/bin/spark-submit
--jars /home/work/spark-1.6.0-cdh5.8.0/lib/spark-assembly-1.6.0-cdh5.8.0-hadoop2.6.0-cdh5.8.0.jar,/home/work/spark-1.6.0-cdh5.8.0/lib/spark-streaming_2.10-1.6.0-cdh5.8.0.jar --conf spark.streaming.kafka.maxRatePerPartition=40
./rr.py 10.0.4.1:9092 nginx_www true
How to run:
./rr.py broker_list topic true|false (whether to read offsets from MySQL)
On the first run no offsets are stored in the MySQL table yet, so pass false as the last argument.
After the job is killed, restart it with true to continue from the position where it left off.
The database connection is configured inside rr.py; the SQL for creating the table is given below.

rr.py

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
# Store Kafka offsets in MySQL
from __future__ import print_function




import sys
import json
import traceback
import logging
import MySQLdb
import decimal
import urllib2
import time



from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils,TopicAndPartition



FORMAT = '%(asctime)-15s %(message)s'
logger = logging.getLogger('root')
logger.setLevel(logging.DEBUG)


class JSONObject:
    def __init__(self, d):
        self.__dict__ = d


gdbconf = {

    'ddseconds':10,
    'sparkdbconf' : {
        'host': '20.25.194.93',
        'port': 3306,
        'user': '****',
        'passwd': '*****',
        'db': 'mytest',
        'charset': 'utf8'
    }
}

import re




# Process one line read from Kafka (the message value is at index 1 of the (key, value) tuple)
def processrecord(line):
    import sys
    reload(sys)
    sys.setdefaultencoding("utf-8")
    line = line[1].decode('utf-8').encode('utf-8')

    try:

        theone = dict()
        fields = line.split('|')
        if len(fields)>25:
            return fields[25]

        return None



    except ValueError as e:
        #print(e)
        return None
        #return "【line json decode erro】"+line
    except :
        #print(traceback.format_exc())
        raise
    pass

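# Read the saved offset for each partition of `topic` from MySQL and return
# a {TopicAndPartition: offset} dict suitable for createDirectStream().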
def getoffset(topic):
    fromOffsets = dict()
    db = MySQLdb.connect(**gdbconf['sparkdbconf'])
    cursor=db.cursor()
    count = cursor.execute("select `partition`,`offset` from sparkstreaming where `topic`='%s' " %(topic))
    if count>=1:
        ofs = cursor.fetchall()
        for o in ofs:
            topicPartion = TopicAndPartition(topic,int(o[0]))
            fromOffsets[topicPartion] = long(o[1])
        return fromOffsets
    else:
        print("no offset found")
        exit(1)
    pass




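# Write the end offset of each OffsetRange in this batch back to MySQL,
# using INSERT ... ON DUPLICATE KEY UPDATE keyed on (topic, partition).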
def updateoffset(rdd):

    if not rdd.isEmpty():

        progress = 'logtime'

        db = MySQLdb.connect(**gdbconf['sparkdbconf'])
        db.autocommit(1)

        cursor=db.cursor()
        for o in rdd.offsetRanges():
            print(o.topic)
            print(o.partition, o.fromOffset, o.untilOffset)
            count = cursor.execute("INSERT INTO sparkstreaming (`topic`,`partition`,`offset`,`progress`) VALUES ('%s',%d,%d,'%s')  ON DUPLICATE KEY UPDATE `offset`=%d,`progress`='%s'" %(o.topic,o.partition,o.untilOffset,progress,o.untilOffset,progress))
            if count>=1:
                print("update offset success")
            else:
                print("offset update error")
        pass
        cursor.close()
        db.close()
    else:
        print("rdd is empty no need to update offset")

# Process each batch: aggregate the records, persist the offsets, and print the results
def get_output(_, rdd):

    newrdd = rdd.map(processrecord).filter(lambda x: x is not None).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)
    if not newrdd.isEmpty():

        try:
            updateoffset(rdd)
        except :
            traceback.print_exc()
        else:
            pass

        # Collect the RDD and iterate over all of its records
        for jstr in newrdd.collect():
            try:
                print(jstr)
            except:
                print(traceback.format_exc())
                #raise
        pass


if __name__ == "__main__":
    if len(sys.argv) != 4:
        print("Usage: xxx.py <broker_list> <topic> <fromlast>", file=sys.stderr)
        exit(-1)
    brokers, topic, fromlast  = sys.argv[1:]
    print("Creating new context")
    # Create a local SparkContext with 2 worker threads
    sc = SparkContext("local[2]", "logsdk2")
    ssc = StreamingContext(sc, gdbconf['ddseconds'])

    fromOffsets = None
    if fromlast == "true":
        fromOffsets = getoffset(topic)
        pass

    orderkafkaDstream = KafkaUtils.createDirectStream(ssc, [topic], {"metadata.broker.list": brokers},fromOffsets)
    orderkafkaDstream.foreachRDD(get_output)

    ssc.start()
    ssc.awaitTermination()

Create the corresponding MySQL table:

CREATE TABLE `sparkstreaming` (
  `id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
  `topic` VARCHAR(80) NOT NULL DEFAULT '' COMMENT 'kafka topic',
  `partition` INT(11) NOT NULL DEFAULT '0' COMMENT 'kafka partition',
  `offset` BIGINT(20) NOT NULL COMMENT 'offset',
  `updatetime` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
  `progress` VARCHAR(50) DEFAULT NULL COMMENT 'time progress of the log (for easy inspection)',
  PRIMARY KEY (`id`),
  UNIQUE KEY `idx_topic_partition` (`topic`,`partition`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Reference:
https://spark.apache.org/docs/1.6.1/streaming-kafka-integration.html

When working with a multi-line string in bash, the first thing that usually comes to mind is ordinary string handling, for example:

string="Hello linux"
echo $string

In fact, bash offers a very good solution for this: the "multi-line" here document.
Basic usage with variables
e.g. with variables included

cat > myfile.txt <<EOF
this file has $variable $names $inside
EOF


# write the here document into myfile.txt
cat myfile.txt
# output (the variables are not set yet):
#this file has

variable="ONE"
names="TWO"
inside="expanded variables"

cat > myfile.txt <<EOF
this file has $variable $names $inside
EOF


#print out the content of myfile.txt
cat myfile.txt
# output:
#this file has ONE TWO expanded variables

Without variable expansion

cat > myfile.txt <<"EOF"
this file has $variable $dollar $name $inside
EOF

cat myfile.txt
# output:
#this file has $variable $dollar $name $inside

# NB: quoting the delimiter ("EOF") decides whether variables are expanded

Without variable expansion - example 2

cat > myfile.txt <<EOF
this file has $variable \$dollar \$name \$inside
EOF


cat myfile.txt
# output:
# this file has $variable $dollar $name $inside

# escaping the dollar sign ($) stops bash from expanding the variables

Assigning a multi-line text to a variable
Example 1:

read -d '' stringvar <<-"_EOF_"

all the leading dollars in the $variable $name are $retained

_EOF_
# print the variable
echo $stringvar;
# all the leading dollars in the $variable $name are $retained

Example 2:

read -d '' help <<- "_EOF_"
  usage: up [--level <n>| -n <levels>][--help][--version]

  Report bugs to:
  up home page:
_EOF_

Example 3:

VARIABLE1="<?xml version="1.0" encoding='UTF-8'?>
<report>
  <img src="a-vs-b.jpg"/>
  <caption>Thus is a future post on Multi Line Strings in bash
  <date>1511</date>-<date>1512</date>.</caption>
</report>"

Example 4:

VARIABLE2=$(cat <<EOF
<?xml version="1.0" encoding='UTF-8'?>
<report>
  <img src="a-vs-b.jpg"/>
  <caption>Thus is a future post on Multi Line Strings in bash
  <date>1511</date>-<date>1512</date>.</caption>
</report>
EOF
)

Example 5:

VARIABLE3=`cat <<EOF
<?xml version="1.0" encoding='UTF-8'?>
<report>
  <img src="a-vs-b.jpg"/>
  <caption>Thus is a future post on Multi Line Strings in bash
  <date>1511</date>-<date>1512</date>.</caption>
</report>
EOF`

Example 6 (writing directly to a file):

cat > heredocfile.txt <<_EOF_
I am line 1
I am line 2
I'm the last line
_EOF_

# test
cat heredocfile.txt
# I am line 1
# I am line 2
# I'm the last line

# and then, change your echo statement to include the '-e' option
# which will turn on escape sequence processing:
echo -e $USAGE >&2

Example 7:

sudo cat > /aaaa.txt <<_EOF_
I am line 1
I am line 2
I'm the last line
_EOF_

# sudo with > or >> fails: permission denied (the redirection is performed by the unprivileged shell, not by sudo)

Example 8:

# create
sudo tee /aaa.txt << EOF
  echo "Hello World 20314"
EOF

Example 9 (appending to the text file):

# Append to Sudo
sudo tee -a  /aaa.txt << EOF
 echo "This Line is appended"
EOF

Example 10:

sudo sh -c "cat > /aaa.txt" <<"EOT"
this text gets saved as sudo - $10 - ten dollars ...
EOT

cat /aaa.txt
#this text gets saved as sudo - $10 - ten dollars ...

Example 11:

cat << "EOF" | sudo tee /aaa.txt
let's count
$one
two
$three
four

EOF

cat /aaa.txt
#let's count
#$one
#two
#$three
#four

About tee
> tee --help
Usage: tee [OPTION]... [FILE]...
Copy standard input to each FILE, and also to standard output.

-a, --append append to the given FILEs, do not overwrite
-i, --ignore-interrupts ignore interrupt signals
--help display this help and exit
--version output version information and exit

If a FILE is -, copy again to standard output.

Report tee bugs to bug-coreutils@gnu.org
GNU coreutils home page:
General help using GNU software:
For complete documentation, run: info coreutils 'tee invocation'

References:
1. Heredoc Quoting – Credit to Ignacio Vazquez-Abrams: http://serverfault.com/questions/399428/how-do-you-escape-characters-in-heredoc
2. Heredoc Quoting – Credit to Dennis Williamson: http://stackoverflow.com/questions/3731513/how-do-you-type-a-tab-in-a-bash-here-document
3. http://serverfault.com/questions/72476/clean-way-to-write-complex-multi-line-string-to-a-variable
4. http://arstechnica.com/civis/viewtopic.php?p=21091503
5. http://superuser.com/questions/201829/sudo-permission-denied
6. http://stackoverflow.com/questions/4937792/using-variables-inside-a-bash-heredoc
7. http://stackoverflow.com/questions/2600783/how-does-the-vim-write-with-sudo-trick-work
8. http://www.unix.com/shell-programming-scripting/187477-variables-heredoc.html

Source: http://www.woola.net/detail/2016-09-05-bash-multi-line-text.html

Encrypting Your File

tar and gzip the file, then encrypt it using des3 and a secret key.
tar cvzf - mysql_dump.sql | openssl des3 -salt -k #YOUR PASSWORD# | dd of=encrypted_mysql_dump
That simple!

Decrypting Your File

dd if=encrypted_mysql_dump | openssl des3 -d -k #YOUR PASSWORD# | tar xvzf -

ls -ial   # list files with their inode numbers
find . -maxdepth 1 -type f -inum 748010  -delete   # delete the file by inode number

1. Every file has a unique inode number.
2. ls -i shows the inode number.
3. Renaming with the find command:
find . -inum <inode> -exec mv {} newname \;
Whatever follows -exec is a shell command; {} stands for the current file name and \; terminates the command.
4. Batch renaming:
ls -i | awk '{printf("find . -inum %s -exec mv {} %03d.txt \\;\n", $1, ++i)}' | sh
awk's printf behaves like C's; $1 is the first whitespace-separated field, and the variable i is uninitialized, so it defaults to 0 (++i therefore starts counting at 1).