Pig Development Guide - Hadoop Developer Guide - E-MapReduce - Alibaba Cloud
Using OSS in Pig
When specifying an OSS path, use a form like the following:
oss://${AccessKeyId}:${AccessKeySecret}@${bucket}.${endpoint}/${path}
Parameters:
- ${AccessKeyId}: the AccessKeyId of your account.
- ${AccessKeySecret}: the secret key corresponding to that AccessKeyId.
- ${bucket}: a bucket that the AccessKeyId can access.
- ${endpoint}: the network endpoint used to access OSS, determined by the region your cluster is in; the OSS bucket must be in the same region as the cluster.
- ${path}: the path within the bucket.
For the specific endpoint values, see OSS Endpoint.
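As an illustration of how the pieces fit together, the sketch below assembles the URI from its parts. All values are hypothetical placeholders (not real credentials or a confirmed endpoint); substitute your own.

```python
# Hypothetical placeholder values for illustration only -- not real credentials.
access_key_id = "LTAI****"
access_key_secret = "secret****"
bucket = "emr"
endpoint = "oss-cn-hangzhou-internal.aliyuncs.com"  # assumed region endpoint
path = "data/excite.log.bz2"

# Assemble the URI in the oss://${AccessKeyId}:${AccessKeySecret}@${bucket}.${endpoint}/${path} form.
oss_uri = f"oss://{access_key_id}:{access_key_secret}@{bucket}.{endpoint}/{path}"
print(oss_uri)
```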
The following example uses the script1-hadoop.pig script shipped with Pig. Upload the tutorial.jar and excite.log.bz2 files from the Pig distribution to OSS; assume the upload paths are oss://emr/jars/tutorial.jar and oss://emr/data/excite.log.bz2.
Then follow these steps:
- Write the script. Modify the jar file path and the input/output paths in the script as shown below. Note that OSS paths take the form
oss://${AccessKeyId}:${AccessKeySecret}@${bucket}.${endpoint}/object/path
.
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- Query Phrase Popularity (Hadoop cluster)
-- This script processes a search query log file from the Excite search engine and finds search phrases that occur with particularly high frequency during certain times of the day.
-- Register the tutorial JAR file so that the included UDFs can be called in the script.
REGISTER oss://${AccessKeyId}:${AccessKeySecret}@${bucket}.${endpoint}/data/tutorial.jar;
-- Use the PigStorage function to load the excite log file into the 'raw' bag as an array of records.
-- Input: (user,time,query)
raw = LOAD 'oss://${AccessKeyId}:${AccessKeySecret}@${bucket}.${endpoint}/data/excite.log.bz2' USING PigStorage('\t') AS (user, time, query);
-- Call the NonURLDetector UDF to remove records if the query field is empty or a URL.
clean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);
-- Call the ToLower UDF to change the query field to lowercase.
clean2 = FOREACH clean1 GENERATE user, time, org.apache.pig.tutorial.ToLower(query) as query;
-- Because the log file only contains queries for a single day, we are only interested in the hour.
-- The excite query log timestamp format is YYMMDDHHMMSS.
-- Call the ExtractHour UDF to extract the hour (HH) from the time field.
houred = FOREACH clean2 GENERATE user, org.apache.pig.tutorial.ExtractHour(time) as hour, query;
-- Call the NGramGenerator UDF to compose the n-grams of the query.
ngramed1 = FOREACH houred GENERATE user, hour, flatten(org.apache.pig.tutorial.NGramGenerator(query)) as ngram;
-- Use the DISTINCT command to get the unique n-grams for all records.
ngramed2 = DISTINCT ngramed1;
-- Use the GROUP command to group records by n-gram and hour.
hour_frequency1 = GROUP ngramed2 BY (ngram, hour);
-- Use the COUNT function to get the count (occurrences) of each n-gram.
hour_frequency2 = FOREACH hour_frequency1 GENERATE flatten($0), COUNT($1) as count;
-- Use the GROUP command to group records by n-gram only.
-- Each group now corresponds to a distinct n-gram and has the count for each hour.
uniq_frequency1 = GROUP hour_frequency2 BY group::ngram;
-- For each group, identify the hour in which this n-gram is used with a particularly high frequency.
-- Call the ScoreGenerator UDF to calculate a "popularity" score for the n-gram.
uniq_frequency2 = FOREACH uniq_frequency1 GENERATE flatten($0), flatten(org.apache.pig.tutorial.ScoreGenerator($1));
-- Use the FOREACH-GENERATE command to assign names to the fields.
uniq_frequency3 = FOREACH uniq_frequency2 GENERATE $1 as hour, $0 as ngram, $2 as score, $3 as count, $4 as mean;
-- Use the FILTER command to remove all records with a score less than or equal to 2.0.
filtered_uniq_frequency = FILTER uniq_frequency3 BY score > 2.0;
-- Use the ORDER command to sort the remaining records by hour and score.
ordered_uniq_frequency = ORDER filtered_uniq_frequency BY hour, score;
-- Use the PigStorage function to store the results.
-- Output: (hour, n-gram, score, count, average_counts_among_all_hours)
STORE ordered_uniq_frequency INTO 'oss://${AccessKeyId}:${AccessKeySecret}@${bucket}.${endpoint}/data/script1-hadoop-results' USING PigStorage();
- Create a job. Store the script written in step 1 on OSS; assume the storage path is
oss://emr/jars/script1-hadoop.pig
. Then create the corresponding Pig job in E-MapReduce.
- Create an execution plan and run it. Create an execution plan in E-MapReduce, add the Pig job created in the previous step to it, and choose the "Run now" strategy. The script1-hadoop job will then run on the selected cluster.
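To see what the Pig script computes, the pipeline can be approximated in plain Python. This is a sketch under stated assumptions: the sample records are made up, only unigrams are generated (the real NGramGenerator UDF also emits bigrams), and the scoring formula (count divided by the n-gram's mean hourly count) is a stand-in, since the real ScoreGenerator UDF computes its own statistics.

```python
from collections import Counter, defaultdict
from statistics import mean

# Hypothetical sample records in the Excite log layout: (user, time "YYMMDDHHMMSS", query).
records = [
    ("u1", "970916083101", "Hello World"),
    ("u2", "970916083205", "hello world"),
    ("u3", "970916093010", "weather"),
    ("u4", "970916083330", "hello world"),
]

# clean2 / houred / ngramed1 / ngramed2: lowercase the query, keep only the
# hour (characters 6-8 of the timestamp), split into unigrams, de-duplicate.
ngramed = {
    (user, time[6:8], ngram)
    for user, time, query in records
    for ngram in query.lower().split()
}

# hour_frequency1 / hour_frequency2: count occurrences of each (ngram, hour) pair.
hour_freq = Counter((ngram, hour) for _, hour, ngram in ngramed)

# uniq_frequency1..3: regroup the counts by n-gram, then score each hour.
by_ngram = defaultdict(dict)
for (ngram, hour), count in hour_freq.items():
    by_ngram[ngram][hour] = count

scored = []  # rows of (hour, ngram, score, count, mean_count_over_hours)
for ngram, hours in by_ngram.items():
    avg = mean(hours.values())
    for hour, count in sorted(hours.items()):
        scored.append((hour, ngram, count / avg, count, avg))
```

The remaining FILTER/ORDER/STORE steps would simply keep rows with score above 2.0, sort by hour and score, and write the result back to the OSS output path.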
Last updated: 2016-11-23 16:03:59