Apache Kafka: Integration with Storm (Part 2)
Creating a Bolt
A bolt is a component that takes tuples as input, processes them, and produces new tuples as output. Bolts implement the IRichBolt interface. In this program, two bolt classes, SplitBolt (the word splitter) and CountBolt (the word counter), perform the processing.
The IRichBolt interface has the following methods:
prepare: Provides the bolt with an environment to execute in. The executors run this method to initialize the bolt.
execute: Processes a single input tuple.
cleanup: Called when a bolt is about to shut down.
declareOutputFields: Declares the output schema of the tuples.
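To see how these methods fit together, here is a minimal sketch of the bolt life cycle. The class name EchoBolt is hypothetical and not part of the sample; it extends BaseRichBolt (in backtype.storm.topology.base), a convenience base class that supplies empty defaults for cleanup and getComponentConfiguration:

import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

// Hypothetical bolt illustrating the life cycle; not part of the sample program.
public class EchoBolt extends BaseRichBolt {
   private OutputCollector collector;

   @Override
   public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
      // Called once per executor before any tuples arrive.
      this.collector = collector;
   }

   @Override
   public void execute(Tuple input) {
      // Called once per input tuple; emit a result, then ack the input.
      collector.emit(new Values(input.getString(0)));
      collector.ack(input);
   }

   @Override
   public void declareOutputFields(OutputFieldsDeclarer declarer) {
      // Names the single field of the tuples this bolt emits.
      declarer.declare(new Fields("echo"));
   }
}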
Let us create SplitBolt.java, which implements the logic to split a sentence into words, and CountBolt.java, which implements the logic to separate out unique words and count their occurrences.
SplitBolt.java
import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.IRichBolt;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class SplitBolt implements IRichBolt {
   private OutputCollector collector;

   @Override
   public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
      this.collector = collector;
   }

   @Override
   public void execute(Tuple input) {
      // Split the incoming sentence on spaces and emit one tuple per word.
      String sentence = input.getString(0);
      String[] words = sentence.split(" ");

      for (String word : words) {
         word = word.trim();

         if (!word.isEmpty()) {
            word = word.toLowerCase();
            collector.emit(new Values(word));
         }
      }

      collector.ack(input);
   }

   @Override
   public void declareOutputFields(OutputFieldsDeclarer declarer) {
      // Each emitted tuple has a single field named "word".
      declarer.declare(new Fields("word"));
   }

   @Override
   public void cleanup() {}

   @Override
   public Map<String, Object> getComponentConfiguration() {
      return null;
   }
}
CountBolt.java
import java.util.HashMap;
import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.IRichBolt;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.tuple.Tuple;

public class CountBolt implements IRichBolt {
   Map<String, Integer> counters;
   private OutputCollector collector;

   @Override
   public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
      this.counters = new HashMap<String, Integer>();
      this.collector = collector;
   }

   @Override
   public void execute(Tuple input) {
      String str = input.getString(0);

      // Create a new entry for a first-time word, otherwise increment its count.
      if (!counters.containsKey(str)) {
         counters.put(str, 1);
      } else {
         Integer c = counters.get(str) + 1;
         counters.put(str, c);
      }

      collector.ack(input);
   }

   @Override
   public void cleanup() {
      // Print the accumulated counts when the bolt shuts down.
      for (Map.Entry<String, Integer> entry : counters.entrySet()) {
         System.out.println(entry.getKey() + " : " + entry.getValue());
      }
   }

   @Override
   public void declareOutputFields(OutputFieldsDeclarer declarer) {
      // This bolt does not emit any tuples downstream.
   }

   @Override
   public Map<String, Object> getComponentConfiguration() {
      return null;
   }
}
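Note that CountBolt accumulates counts in an in-memory HashMap and only prints them from cleanup. Storm reliably calls cleanup only when a topology is shut down in local mode, as in this example; on a production cluster it may never be invoked, so a real application would emit the counts downstream or persist them instead.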
Submitting the Topology
A Storm topology is essentially a Thrift structure. The TopologyBuilder class provides simple and easy methods to create complex topologies: it has methods to set spouts (setSpout) and to set bolts (setBolt), and finally its createTopology method builds the topology. The shuffleGrouping and fieldsGrouping methods help to set stream groupings between the spout and the bolts.
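The choice of grouping matters once bolts run with parallelism greater than one: shuffleGrouping distributes tuples randomly across a bolt's tasks, while fieldsGrouping routes all tuples that share a field value to the same task. A hedged sketch (assuming the kafkaSpoutConfig built in the full example below, plus an import of backtype.storm.tuple.Fields):

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("kafka-spout", new KafkaSpout(kafkaSpoutConfig));

// Any splitter task may process any sentence, so random distribution is fine.
builder.setBolt("word-spitter", new SplitBolt(), 2).shuffleGrouping("kafka-spout");

// Every occurrence of a given word must reach the same counter task,
// so group the stream by the "word" field that SplitBolt declares.
builder.setBolt("word-counter", new CountBolt(), 2)
       .fieldsGrouping("word-spitter", new Fields("word"));

The sample program below uses shuffleGrouping throughout, which is safe because each bolt runs with the default parallelism of one.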
Local cluster: For development purposes, we can create a local cluster using the LocalCluster object and then submit the topology using the submitTopology method of the LocalCluster class.
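On a production cluster, the topology would instead be packaged as a jar and submitted through the StormSubmitter class (in backtype.storm). A minimal sketch, assuming the same config and builder as in the example below:

// Sketch: submitting the same topology to a real cluster.
// The class is packaged in a jar and launched with the `storm jar` command.
StormSubmitter.submitTopology("KafkaStormSample", config, builder.createTopology());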
KafkaStormSample.java
import java.util.UUID;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.TopologyBuilder;

import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

public class KafkaStormSample {
   public static void main(String[] args) throws Exception {
      Config config = new Config();
      config.setDebug(true);
      config.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 1);

      String zkConnString = "localhost:2181";
      String topic = "my-first-topic";
      BrokerHosts hosts = new ZkHosts(zkConnString);

      // Configure the Kafka spout: topic, ZooKeeper root path, and consumer id.
      SpoutConfig kafkaSpoutConfig = new SpoutConfig(hosts, topic, "/" + topic, UUID.randomUUID().toString());
      kafkaSpoutConfig.bufferSizeBytes = 1024 * 1024 * 4;
      kafkaSpoutConfig.fetchSizeBytes = 1024 * 1024 * 4;
      kafkaSpoutConfig.forceFromStart = true;
      kafkaSpoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

      // Wire spout -> splitter -> counter.
      TopologyBuilder builder = new TopologyBuilder();
      builder.setSpout("kafka-spout", new KafkaSpout(kafkaSpoutConfig));
      builder.setBolt("word-spitter", new SplitBolt()).shuffleGrouping("kafka-spout");
      builder.setBolt("word-counter", new CountBolt()).shuffleGrouping("word-spitter");

      // Run the topology in a local cluster for ten seconds, then shut down.
      LocalCluster cluster = new LocalCluster();
      cluster.submitTopology("KafkaStormSample", config, builder.createTopology());

      Thread.sleep(10000);

      cluster.shutdown();
   }
}
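A few details of this configuration are worth spelling out. Config.TOPOLOGY_MAX_SPOUT_PENDING is set to 1, so the spout keeps at most one un-acked tuple in flight at a time, which keeps this small demo orderly. forceFromStart = true tells the KafkaSpout to ignore any consumer offset previously stored in ZooKeeper (under the "/" + topic path given in the SpoutConfig) and read the topic from the beginning. Finally, the topology runs for ten seconds (Thread.sleep(10000)) before the local cluster shuts down; the shutdown invokes CountBolt's cleanup, which prints the accumulated counts.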
Before compiling, the Kafka-Storm integration needs the Curator ZooKeeper client library. Curator version 2.9.1 supports Apache Storm version 0.9.5 (which we use in this tutorial). Download the jar files specified below and place them in the Java classpath.
curator-client-2.9.1.jar
curator-framework-2.9.1.jar
Having placed the dependency jars on the classpath, compile the program with the following command:
javac -cp "/path/to/Kafka/apache-storm-0.9.5/lib/*" *.java
Execution
Start the Kafka producer CLI (explained in the previous chapter) and create a new topic called "my-first-topic".
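If the exact commands are not at hand, the following is a sketch that should work from the Kafka installation directory (assumptions: a broker on localhost:9092, ZooKeeper on localhost:2181, and the older CLI flags that match Kafka releases contemporary with Storm 0.9.5):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic my-first-topic
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-first-topic

With the producer running, type some sample messages, one per line: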
hello
kafka
storm
spark
test message
another test message
Now execute the application using the following command:
java -cp "/path/to/Kafka/apache-storm-0.9.5/lib/*":. KafkaStormSample
The sample output of this application is shown below:
storm : 1
test : 2
spark : 1
another : 1
kafka : 1
hello : 1
message : 2