

Pig Job Configuration

In E-MapReduce, a Pig environment is provided by default when you request a cluster, so you can use Pig directly to create and operate on your own tables and data. The steps are as follows.

  1. Prepare your Pig script in advance, for example:

    ```pig
    /*
    * Licensed to the Apache Software Foundation (ASF) under one
    * or more contributor license agreements. See the NOTICE file
    * distributed with this work for additional information
    * regarding copyright ownership. The ASF licenses this file
    * to you under the Apache License, Version 2.0 (the
    * "License"); you may not use this file except in compliance
    * with the License. You may obtain a copy of the License at
    *
    * https://www.apache.org/licenses/LICENSE-2.0
    *
    * Unless required by applicable law or agreed to in writing, software
    * distributed under the License is distributed on an "AS IS" BASIS,
    * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    * See the License for the specific language governing permissions and
    * limitations under the License.
    */
    -- Query Phrase Popularity (Hadoop cluster)
    -- This script processes a search query log file from the Excite search engine and finds search phrases that occur with particularly high frequency during certain times of the day.
    -- Register the tutorial JAR file so that the included UDFs can be called in the script.
    REGISTER oss://emr/checklist/jars/chengtao/pig/tutorial.jar;
    -- Use the PigStorage function to load the excite log file into the "raw" bag as an array of records.
    -- Input: (user,time,query)
    raw = LOAD 'oss://emr/checklist/data/chengtao/pig/excite.log.bz2' USING PigStorage('\t') AS (user, time, query);
    -- Call the NonURLDetector UDF to remove records if the query field is empty or a URL.
    clean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);
    -- Call the ToLower UDF to change the query field to lowercase.
    clean2 = FOREACH clean1 GENERATE user, time, org.apache.pig.tutorial.ToLower(query) as query;
    -- Because the log file only contains queries for a single day, we are only interested in the hour.
    -- The excite query log timestamp format is YYMMDDHHMMSS.
    -- Call the ExtractHour UDF to extract the hour (HH) from the time field.
    houred = FOREACH clean2 GENERATE user, org.apache.pig.tutorial.ExtractHour(time) as hour, query;
    -- Call the NGramGenerator UDF to compose the n-grams of the query.
    ngramed1 = FOREACH houred GENERATE user, hour, flatten(org.apache.pig.tutorial.NGramGenerator(query)) as ngram;
    -- Use the DISTINCT command to get the unique n-grams for all records.
    ngramed2 = DISTINCT ngramed1;
    -- Use the GROUP command to group records by n-gram and hour.
    hour_frequency1 = GROUP ngramed2 BY (ngram, hour);
    -- Use the COUNT function to get the count (occurrences) of each n-gram.
    hour_frequency2 = FOREACH hour_frequency1 GENERATE flatten($0), COUNT($1) as count;
    -- Use the GROUP command to group records by n-gram only.
    -- Each group now corresponds to a distinct n-gram and has the count for each hour.
    uniq_frequency1 = GROUP hour_frequency2 BY group::ngram;
    -- For each group, identify the hour in which this n-gram is used with a particularly high frequency.
    -- Call the ScoreGenerator UDF to calculate a "popularity" score for the n-gram.
    uniq_frequency2 = FOREACH uniq_frequency1 GENERATE flatten($0), flatten(org.apache.pig.tutorial.ScoreGenerator($1));
    -- Use the FOREACH-GENERATE command to assign names to the fields.
    uniq_frequency3 = FOREACH uniq_frequency2 GENERATE $1 as hour, $0 as ngram, $2 as score, $3 as count, $4 as mean;
    -- Use the FILTER command to remove all records with a score less than or equal to 2.0.
    filtered_uniq_frequency = FILTER uniq_frequency3 BY score > 2.0;
    -- Use the ORDER command to sort the remaining records by hour and score.
    ordered_uniq_frequency = ORDER filtered_uniq_frequency BY hour, score;
    -- Use the PigStorage function to store the results.
    -- Output: (hour, n-gram, score, count, average_counts_among_all_hours)
    STORE ordered_uniq_frequency INTO 'oss://emr/checklist/data/chengtao/pig/script1-hadoop-results' USING PigStorage();
    ```
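
    If you want to sanity-check the script before uploading it, one option is a local run in Pig's local execution mode. This is a minimal sketch, assuming Pig is installed on your machine and that the REGISTER/LOAD/STORE statements have been temporarily pointed at local copies of tutorial.jar and the Excite log instead of the oss:// URIs:

    ```shell
    # Hypothetical local dry run; first edit the script so that
    # REGISTER/LOAD/STORE reference local files instead of oss:// paths.
    pig -x local script1-hadoop-oss.pig
    ```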
  2. Save the script to a file, for example script1-hadoop-oss.pig, and upload it to a directory on OSS (for example: oss://path/to/script1-hadoop-oss.pig).
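
    One possible way to perform the upload is the ossutil command-line tool (an assumption; any OSS client works). This sketch presumes ossutil has already been configured with your credentials, and the destination reuses the placeholder path from this step:

    ```shell
    # Copy the script to OSS (placeholder destination; ossutil must have
    # been set up beforehand, e.g. via `ossutil config`).
    ossutil cp script1-hadoop-oss.pig oss://path/to/script1-hadoop-oss.pig
    ```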

  3. Go to the job list in the Alibaba Cloud E-MapReduce console.

  4. Click Create Job in the upper-right corner of the page to open the job creation page.

  5. Enter a job name.

  6. Select the Pig job type to indicate that the job being created is a Pig job. Behind the scenes, a job of this type is submitted in the following way:

    ```shell
    pig [user provided parameters]
    ```
  7. In the application parameters box, enter the arguments that follow the pig command. For example, to use the Pig script just uploaded to OSS, enter the following:

    ```shell
    -x mapreduce ossref://emr/checklist/jars/chengtao/pig/script1-hadoop-oss.pig
    ```

    You can also click Select OSS path to browse OSS and pick the script; the system will automatically fill in the absolute path of the Pig script on OSS. Be sure to change the prefix of the Pig script path to ossref (click Switch resource type) so that E-MapReduce can download the file correctly.
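
    To make the ossref behavior concrete: E-MapReduce downloads an ossref:// resource onto the cluster before launching the job, whereas an oss:// path is passed through as-is. A sketch of the effective invocation after download (the local path below is purely illustrative):

    ```shell
    # Effective command once E-MapReduce has resolved the ossref:// resource
    # to a local copy on the cluster (path hypothetical):
    pig -x mapreduce /path/on/cluster/script1-hadoop-oss.pig
    ```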

  8. Select a policy for what to do if execution fails.

  9. Click OK. The definition of the Pig job is now complete.
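
After the job has run successfully, the results land under the path in the script's STORE statement. One minimal way to inspect them, assuming the ossutil CLI is configured and that the output follows the usual Hadoop part-file naming (an assumption, not guaranteed):

```shell
# List and print the job output on OSS (path taken from the script's
# STORE statement; the part-file name is a typical Hadoop convention
# and may differ on your cluster).
ossutil ls oss://emr/checklist/data/chengtao/pig/script1-hadoop-results/
ossutil cat oss://emr/checklist/data/chengtao/pig/script1-hadoop-results/part-r-00000
```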

Last updated: 2016-11-23 16:03:59
