Why Apache Beam? A data Artisans perspective
https://cloud.google.com/dataflow/blog/dataflow-beam-and-spark-comparison
https://github.com/apache/incubator-beam
https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101
https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-102
https://data-artisans.com/why-apache-beam/
As the Dataflow SDK and the Runners were moving to Apache Incubator as Apache Beam, we were asked by Google to bring the Flink runner into the codebase of Beam, and become committers and PMC members in the new project. We decided to go full st(r)eam ahead with this opportunity as we believe that
(1) the Beam model is the future reference programming model for writing data applications in both stream and batch, and
(2) Flink is the definitive platform to execute these data applications. As Beam is now taking shape, Flink is currently the only practical execution engine for Beam programs outside Google’s Cloud.
Beam comprises an SDK part and a runner part; outside of Google, Flink is currently one of the better choices for a runner.
Flink and Beam are completely aligned in their concepts, which makes the translation of Beam programs to Flink jobs both straightforward and very efficient.
With full support for concepts such as event time, watermarks, and triggers, and with the new features we are contributing to Flink, we believe that the superiority of the Flink runner will stick for the foreseeable future.
Flink and Beam are very similar in design and concepts, so translating Beam programs into Flink jobs is quite direct.
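To make the shared concepts concrete, here is a minimal sketch of event-time processing in the Flink DataStream API: a watermark assigner that tolerates out-of-order events feeds a keyed event-time window. This assumes Flink 1.x-era class names; the source elements and field layout are hypothetical.

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

public class EventTimeExample {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

    // Hypothetical input: (key, event timestamp in millis) pairs.
    DataStream<Tuple2<String, Long>> events = env
        .fromElements(Tuple2.of("user-a", 1_000L), Tuple2.of("user-b", 2_000L))
        // Watermarks track event time, tolerating up to 5 s of out-of-order data.
        .assignTimestampsAndWatermarks(
            new BoundedOutOfOrdernessTimestampExtractor<Tuple2<String, Long>>(Time.seconds(5)) {
              @Override
              public long extractTimestamp(Tuple2<String, Long> e) {
                return e.f1;
              }
            });

    events
        .keyBy(0)                      // partition by key
        .timeWindow(Time.minutes(1))   // 1-minute event-time windows
        .sum(1)                        // aggregate within each window
        .print();

    env.execute("event-time example");
  }
}
```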
One question remains: what is the relationship between Flink’s own native API (DataStream) and the Beam API?
Will both of these continue to be supported, and is it confusing for developers to have two different APIs that in the end generate Flink jobs?
Our committers at data Artisans will continue to fully support both the Flink DataStream API (which is, as of Flink 1.0, stable and backwards compatible), as well as the Beam API as it evolves for Beam programs that run on Flink.
The differences between the two APIs are largely syntactical (and a matter of taste), and we are working together with Google towards unifying the APIs, with the end goal of making the Beam and Flink APIs source compatible.
We believe that the two communities can learn from each other, and we encourage users to use either of the two APIs to implement their Flink jobs for stream data processing.
With the native Flink DataStream API you get an already mature and backwards-compatible API, built-in libraries (e.g., CEP and upcoming SQL), mature tooling and connectors, key-value state (with the ability to query that state in the future), and an API which fully utilizes all the features of Flink’s powerful engine.
With the Beam API, you get the option of portability down the line as more Beam runners mature.
What is the difference between the Flink API and the Beam API?
The differences are mainly syntactic, and the two sides are working to unify the interfaces; even so, both APIs will continue to have their own value.
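To illustrate how superficial the syntactic differences are, here is a hedged sketch of the same per-key windowed sum written against both APIs. Class and method names follow the released Apache Beam SDK (org.apache.beam.*) and the Flink 1.x DataStream API; exact signatures vary across versions, and the input collections are assumed to exist.

```java
import org.apache.beam.sdk.transforms.Sum;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.joda.time.Duration;

public class ApiComparison {

  // Beam: per-key sums over 1-minute fixed event-time windows.
  static PCollection<KV<String, Long>> beamSums(PCollection<KV<String, Long>> input) {
    return input
        .apply(Window.<KV<String, Long>>into(FixedWindows.of(Duration.standardMinutes(1))))
        .apply(Sum.longsPerKey());
  }

  // Flink DataStream: the same logic, spelled differently.
  static DataStream<Tuple2<String, Long>> flinkSums(DataStream<Tuple2<String, Long>> input) {
    return input
        .keyBy(0)
        .timeWindow(Time.minutes(1))
        .sum(1);
  }
}
```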
API, model, and engine
To clarify our points above, we would like to explain what we mean by choice of API, choice of programming model, and choice of execution engine.
Currently, Beam has three available runners: the Google Cloud Dataflow proprietary runner by Google, as well as the Flink and Spark runners, included in the open source Apache Beam project. Let us look at this ecosystem, and add Flink and Spark themselves with their native APIs, as well as Storm:
This shows the key benefit of Beam: the API and the model are unified, even though the engine underneath can differ.
Even more, Google and data Artisans are working together to make the two APIs semantically equivalent, ironing out any minor inconsistencies.
This means that users of either API can switch with relatively low effort.
Our long term goal is to make the Beam and Flink DataStream APIs source-compatible, so that programs written in one can natively run on the other with no code changes.
If you choose to invest in the Beam programming model now, you have two options:
- Use the Flink DataStream API in Java and Scala
- Use the Beam API directly in Java (and soon Python) with the Flink runner
In the long run the Beam and Flink APIs will be compatible, so if you want to use the Beam programming model there are two choices: use the Flink DataStream API directly, or use the Beam API together with the Flink runner, as sketched below.
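As a sketch of option 2, the following minimal Beam pipeline selects the Flink runner through pipeline options. Package names follow the released Apache Beam SDK rather than the incubator-era one (where TextIO and the options classes were spelled slightly differently), and the input path is hypothetical.

```java
import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;

public class BeamOnFlink {
  public static void main(String[] args) {
    FlinkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(FlinkPipelineOptions.class);
    // Selecting the runner is the only engine-specific line; swapping in
    // another runner class retargets the same pipeline to another engine.
    options.setRunner(FlinkRunner.class);

    Pipeline p = Pipeline.create(options);
    p.apply(TextIO.read().from("/tmp/input.txt"))  // hypothetical input path
     .apply(Count.perElement());                    // a sink would follow here

    p.run().waitUntilFinish();
  }
}
```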
We recommend option 1 to users that want to get started immediately, using an already mature and backwards-compatible API, access to libraries (e.g., the existing CEP library and the upcoming SQL functionality), mature tooling and connectors (e.g., to Kafka), as well as an API that fully and natively utilizes all the existing and upcoming features of the Flink engine. In addition, we recommend the Flink native API for use cases that use Flink’s key-value state abstraction, and in the future Flink’s facilities for querying that state.
We recommend option 2 to users that want to keep the option of engine portability (as other Beam runners progress).
The main benefit of choosing the Beam API is portability across runners; the Flink native API, by contrast, is clearly richer and more mature.
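For the key-value state abstraction mentioned in the recommendation above, here is a minimal sketch using Flink's managed ValueState (Flink 1.x API; the running-count logic is purely illustrative):

```java
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

// Keeps a per-key running count in managed, fault-tolerant state.
public class RunningCount
    extends RichFlatMapFunction<Tuple2<String, Long>, Tuple2<String, Long>> {

  private transient ValueState<Long> count;  // scoped to the current key

  @Override
  public void open(Configuration conf) {
    count = getRuntimeContext().getState(
        new ValueStateDescriptor<>("count", Long.class));
  }

  @Override
  public void flatMap(Tuple2<String, Long> event,
                      Collector<Tuple2<String, Long>> out) throws Exception {
    Long current = count.value();  // null on the first event for a key
    long updated = (current == null ? 0L : current) + 1;
    count.update(updated);
    out.collect(Tuple2.of(event.f0, updated));
  }
}
// Usage on a keyed stream: stream.keyBy(0).flatMap(new RunningCount())
```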
In a recent blog post, Google compared Beam and Spark Streaming from a programming model perspective. They took a mobile gaming scenario and implemented several use cases in Beam and Spark Streaming, focusing their analysis on how well the following concerns are separated in the code:
- What results are calculated? Sums, joins, histograms, machine learning models?
- Where in event time are results calculated? Does the time each event originally occurred affect results? Are results aggregated in fixed windows, sessions, or a single global window?
- When in processing time are results materialized? Does the time each event is observed within the system affect results? When are results emitted? Speculatively, as data evolve? When data arrive late and results must be revised? Some combination of these?
- How do refinements of results relate? If additional data arrive and results change, are they independent and distinct, or do they build upon one another?
The original post compares the Flink and Beam APIs on a concrete example, with differently colored parts of the code addressing the four questions above; a sketch of that mapping follows.
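Here is a hedged sketch of how the What/Where/When/How questions map onto Beam code, using the per-team score scenario of Google's mobile-gaming example and current Beam API names; the input collection is assumed to exist.

```java
import org.apache.beam.sdk.transforms.Sum;
import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

public class FourQuestions {
  static PCollection<KV<String, Integer>> teamScores(
      PCollection<KV<String, Integer>> events) {
    return events
        // Where in event time: fixed 1-hour windows.
        .apply(Window.<KV<String, Integer>>into(FixedWindows.of(Duration.standardHours(1)))
            // When in processing time: emit speculative early results every
            // minute, plus a revised result for each late element.
            .triggering(AfterWatermark.pastEndOfWindow()
                .withEarlyFirings(AfterProcessingTime.pastFirstElementInPane()
                    .plusDelayOf(Duration.standardMinutes(1)))
                .withLateFirings(AfterPane.elementCountAtLeast(1)))
            .withAllowedLateness(Duration.standardDays(1))
            // How refinements relate: later panes build on earlier ones.
            .accumulatingFiredPanes())
        // What is calculated: a per-team sum of scores.
        .apply(Sum.integersPerKey());
  }
}
```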
Conclusion
We firmly believe that the Beam model is the correct programming model for streaming and batch data processing.
We encourage users to adopt this model for their future data applications, embodied in either the Beam API itself or the Flink DataStream API.
Further, we believe that Flink, with its current features and roadmap, is currently the most advanced open source stream processor, and at the same time the only practical solution for deploying Beam programs in production on on-premise or non-GCP clusters. We are looking forward to continue pushing the envelope in stream processing and enabling enterprises to use stream processing technology for their data applications.