Analyzing nginx logs with Hive's regex SerDe (RegexSerDe)
1. Environment:
hadoop-2.6.0 + apache-hive-1.2.0-bin
2. Using Hive to analyze nginx logs. A sample of the site's access log:
cat /home/hadoop/hivetestdata/nginx.txt
192.168.1.128 - - [09/Jan/2015:12:38:08 +0800] "GET /avatar/helloworld.png HTTP/1.1" 200 1521 "http://write.blog.csdn.net/postlist" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
183.60.212.153 - - [19/Feb/2015:10:23:29 +0800] "GET /o2o/media.html?menu=3 HTTP/1.1" 200 16691 "-" "Mozilla/5.0 (compatible; EasouSpider; +http://www.easou.com/search/spider.html)"
Each log line has nine fields separated by spaces. In order, they are: client IP, identity, user name, access time, request, HTTP status, response size in bytes, referer, and browser user agent (UA).
We match these nine fields with the following regular expression in Hive:
([^ ]*) ([^ ]*) ([^ ]*) (\[.*\]) (\".*?\") (-|[0-9]*) (-|[0-9]*) (\".*?\") (\".*?\")
Hive also lets us specify a serializer/deserializer (SerDe) for parsing input files, and it ships with a built-in regex SerDe, org.apache.hadoop.hive.serde2.RegexSerDe, which we can use directly here.
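As a quick sanity check of the regex itself (a sketch, not required for the rest of the walkthrough), regexp_extract can pull a single capture group out of a sample line; group numbering starts at 1, so group 5 is the request. Note that backslashes must be doubled inside the HiveQL string literal, exactly as in the SERDEPROPERTIES below.
-- extract capture group 5 (the quoted request) from the second sample line
select regexp_extract(
  '183.60.212.153 - - [19/Feb/2015:10:23:29 +0800] "GET /o2o/media.html?menu=3 HTTP/1.1" 200 16691 "-" "Mozilla/5.0 (compatible; EasouSpider; +http://www.easou.com/search/spider.html)"',
  '([^ ]*) ([^ ]*) ([^ ]*) (\\[.*\\]) (\".*?\") (-|[0-9]*) (-|[0-9]*) (\".*?\") (\".*?\")',
  5
);
-- expected result: "GET /o2o/media.html?menu=3 HTTP/1.1"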
3. Create the table
CREATE TABLE logs
(
host STRING,
identity STRING,
username STRING,
time STRING,
request STRING,
status STRING,
size STRING,
referer STRING,
agent STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
"input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (\\[.*\\]) (\".*?\") (-|[0-9]*) (-|[0-9]*) (\".*?\") (\".*?\")",
"output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s"
)
STORED AS TEXTFILE;
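Once the table exists, an optional check confirms that Hive registered the regex SerDe:
describe formatted logs;
-- the "SerDe Library" row of the output should show org.apache.hadoop.hive.serde2.RegexSerDe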
4. Load the data:
load data local inpath '/home/hadoop/hivetestdata/nginx.txt' into table logs;
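A quick check that the SerDe splits each line into the expected columns:
-- should return the two sample lines broken into columns
select host, time, request, status, size
from logs
limit 2;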
Query the IP addresses with more than 100 requests per hour (substring(time, 2, 14) strips the leading '[' and keeps dd/MMM/yyyy:HH, i.e. the timestamp down to the hour):
select substring(time, 2, 14) datetime, host, count(*) as count
from logs
group by substring(time, 2, 14), host
having count > 100
sort by datetime, count;
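An optional variant (a sketch, assuming the log's English month abbreviations match the JVM's default locale): parse the bracketed time with unix_timestamp instead of a raw substring, and use order by for a single, globally sorted result (sort by only guarantees ordering within each reducer).
-- hourly buckets formatted as yyyy-MM-dd HH; access_hour and cnt are illustrative aliases
select from_unixtime(unix_timestamp(substring(time, 2, 20), 'dd/MMM/yyyy:HH:mm:ss'), 'yyyy-MM-dd HH') as access_hour,
       host,
       count(*) as cnt
from logs
group by from_unixtime(unix_timestamp(substring(time, 2, 20), 'dd/MMM/yyyy:HH:mm:ss'), 'yyyy-MM-dd HH'), host
having count(*) > 100
order by access_hour, cnt;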