The Dream Stream Team for Pulsar & Spring
Tim Spann | Developer Advocate
The Dream Stream Team for Pulsar and Spring
Tim Spann
Developer Advocate
● FLiP(N) Stack = Flink, Pulsar and NiFi Stack
● Streaming Systems / Data Architect
● Experience:
○ 15+ years of experience with batch and streaming
technologies including Pulsar, Flink, Spark, NiFi, Spring,
Java, Big Data, Cloud, MXNet, Hadoop, Datalakes, IoT
and more.
Our bags are packed. Let’s begin the journey to Real-Time Unified Messaging and Streaming.
SLIDES DOWNLOAD
● Introduction
● What is Apache Pulsar?
● Spring Apps
○ Native Pulsar
○ AMQP
○ MQTT
○ Kafka
● Demo
● Q&A
This is the cover art for the vinyl LP "Unknown
Pleasures" by the artist Joy Division. The cover
art copyright is believed to belong to the label,
Factory Records, or the graphic artist(s).
Pulsar 101
Unified Messaging Platform: Guaranteed Message Delivery, Resiliency, Infinite Scalability
WHERE?
Pulsar Global Adoption
WHAT?
Unified Messaging Model (diagram): a single Pulsar topic/partition serves both messaging and streaming through subscriptions. Exclusive has a single active consumer; Failover has a standby consumer that takes over in case of failure in the active one (e.g. Consumer B-0); Shared spreads messages across multiple active consumers; Key-Shared routes all messages with the same key to the same consumer.
Tenants / Namespaces / Topics
Diagram: a Pulsar instance contains Pulsar clusters. Each cluster hosts tenants (e.g. Compliance, Data Services, Marketing); tenants contain namespaces (e.g. Microservices, ETL, Campaigns, Risk Assessment); namespaces contain topics (e.g. Cust Auth, Location Resolution, Demographics, Budgeted Spend, Acct History, Risk Detection).
Messages - The Basic Unit of Pulsar

Component: Description
Value / data payload: The data carried by the message. All Pulsar messages contain raw bytes, although message data can also conform to data schemas.
Key: Messages are optionally tagged with keys, which are used in partitioning and are also useful for things like topic compaction.
Properties: An optional key/value map of user-defined properties.
Producer name: The name of the producer who produced the message. If you do not specify a producer name, the default name is used. Used for message de-duplication.
Sequence ID: Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of the message is its order in that sequence. Used for message de-duplication.
Pulsar Subscription Modes
Different subscription modes
have different semantics:
Exclusive/Failover -
guaranteed order, single active
consumer
Shared - multiple active
consumers, no order
Key_Shared - multiple active
consumers, order for given key
Diagram: Producer 1 and Producer 2 publish keyed messages (<K1,V10> ... <K2,V22>) to a Pulsar topic. Subscription A (Exclusive) has a single Consumer A. Subscription B (Failover) has Consumer B-1, with Consumer B-2 taking over in case of failure in Consumer B-1. Subscription C (Shared) spreads messages across Consumer C-1 and Consumer C-2. Subscription D (Key-Shared) routes all messages for a given key to the same consumer (D-1 or D-2).
Flexible Pub/Sub API for Pulsar - Shared
Consumer<byte[]> consumer = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("work-q-1")
    .subscriptionType(SubscriptionType.Shared)
    .subscribe();
Flexible Pub/Sub API for Pulsar - Failover
Consumer<byte[]> consumer = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("stream-1")
    .subscriptionType(SubscriptionType.Failover)
    .subscribe();
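Flexible Pub/Sub API for Pulsar - Key_Shared (not an original slide; a minimal sketch following the same pattern, with placeholder topic and subscription names)

Consumer<byte[]> consumer = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("keyed-stream-1")
    .subscriptionType(SubscriptionType.Key_Shared)
    .subscribe();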
Reader Interface
byte[] msgIdBytes = // Some byte
array
MessageId id =
MessageId.fromByteArray(msgIdBytes);
Reader<byte[]> reader =
pulsarClient.newReader()
.topic(topic)
.startMessageId(id)
.create();
Create a reader that will read from
some message between earliest and
latest.
Reader
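Once created, the reader is drained with a simple read loop; a minimal sketch (not on the original slide):

while (reader.hasMessageAvailable()) {
    Message<byte[]> msg = reader.readNext();
    // process the payload; readers never acknowledge messages
    System.out.println(new String(msg.getValue()));
}
reader.close();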
Connectivity
• Libraries - (Java, Python, Go, NodeJS,
WebSockets, C++, C#, Scala, Kotlin,...)
• Functions - Lightweight Stream
Processing (Java, Python, Go)
• Connectors - Sources & Sinks
(Cassandra, Kafka, …)
• Protocol Handlers - AoP (AMQP), KoP
(Kafka), MoP (MQTT), RoP (RocketMQ)
• Processing Engines - Flink, Spark,
Presto/Trino via Pulsar SQL
• Data Offloaders - Tiered Storage - (S3)
hub.streamnative.io
Pulsar SQL
Presto/Trino workers can read
segments directly from
bookies (or offloaded storage)
in parallel.
Diagram: producers write to topic partitions (Topic1-Part1, Topic1-Part2, Topic1-Part3) served by Brokers 1-3; each partition is stored as segments (Segment 1 ... Segment X) replicated across Bookies 1-3. A query coordinator uses the topic metadata to fan a query out to SQL workers, which read the segments from the bookies (or offloaded storage) in parallel.
Protocol Handlers: Kafka on Pulsar (KoP), MQTT on Pulsar (MoP), AMQP on Pulsar (AoP), RocketMQ on Pulsar (RoP)
Schema Registry
Diagram: producers serialize data (Avro/Protobuf/JSON values for schema-1, schema-2, schema-3) per schema ID, registering the schema with the Schema Registry when it is not already in their local schema cache. Consumers deserialize data per schema ID, fetching the schema by ID from the registry when it is not in their local cache.
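With the registry in place, clients can attach a schema directly to a producer or consumer. A minimal sketch of a schema-aware consumer (the Observation class, topic, and subscription name are carried over from the other examples as assumptions):

Consumer<Observation> consumer = pulsarClient.newConsumer(JSONSchema.of(Observation.class))
    .topic("persistent://conference/spring/first")
    .subscriptionName("schema-demo")
    .subscribe();

Observation observation = consumer.receive().getValue(); // deserialized per the registered schema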
Pulsar Functions
● Lightweight
computation similar to
AWS Lambda.
● Specifically designed to
use Apache Pulsar as a
message bus.
● Function runtime can
be located within
Pulsar Broker.
A serverless event streaming
framework
import java.util.UUID;
import org.apache.pulsar.client.impl.schema.JSONSchema;
import org.apache.pulsar.functions.api.*;

public class AirQualityFunction implements Function<byte[], Void> {

  @Override
  public Void process(byte[] input, Context context) throws Exception {
    context.getLogger().debug("DBG:" + new String(input));
    // observation: an Observation built from the input bytes (construction omitted on the slide)
    context.newOutputMessage("topicname", JSONSchema.of(Observation.class))
        .key(UUID.randomUUID().toString())
        .property("prop1", "value1")
        .value(observation)
        .send();
    return null;
  }
}
Your Code Here
Pulsar
Function SDK
● Consume messages from one
or more Pulsar topics.
● Apply user-supplied
processing logic to each
message.
● Publish the results of the
computation to another topic.
● Support multiple
programming languages
(Java, Python, Go)
● Can leverage 3rd-party
libraries to support the
execution of ML models on
the edge.
Pulsar Functions
You can start really
small!
● A Docker container
● A Standalone Node
● Small MiniKube Pods
● Free Tier SN Cloud
The Dream Stream Team for Pulsar and Spring
Pulsar - Spring
https://github.com/majusko/pulsar-java-spring-boot-starter
https://github.com/datastax/reactive-pulsar
https://pulsar.apache.org/docs/en/client-libraries-java/
https://github.com/zachelrath/pulsar-graceful-shutdown-java
Spring - Kafka - Pulsar
https://www.baeldung.com/spring-kafka
@Bean
public KafkaTemplate<String, Observation> kafkaTemplate() {
    KafkaTemplate<String, Observation> kafkaTemplate =
        new KafkaTemplate<String, Observation>(producerFactory());
    return kafkaTemplate;
}

ProducerRecord<String, Observation> producerRecord =
    new ProducerRecord<>(topicName, uuidKey.toString(), message);
kafkaTemplate.send(producerRecord);
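The kafkaTemplate() bean above assumes a producerFactory() bean. A minimal sketch, assuming KoP is enabled on the broker and its Kafka listener is on localhost:9092 (both assumptions, not from the slide), using Spring Kafka's DefaultKafkaProducerFactory and JsonSerializer:

@Bean
public ProducerFactory<String, Observation> producerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // Pulsar broker with KoP enabled
    config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class); // Spring Kafka JSON serializer
    return new DefaultKafkaProducerFactory<>(config);
}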
Spring - MQTT - Pulsar
https://roytuts.com/publish-subscribe-message-onto-mqtt-using-spring/
@Bean
public IMqttClient mqttClient(
@Value("${mqtt.clientId}") String clientId,
@Value("${mqtt.hostname}") String hostname,
@Value("${mqtt.port}") int port)
throws MqttException {
IMqttClient mqttClient = new MqttClient(
"tcp://" + hostname + ":" + port, clientId);
mqttClient.connect(mqttConnectOptions());
return mqttClient;
}
MqttMessage mqttMessage = new MqttMessage();
mqttMessage.setPayload(DataUtility.serialize(payload));
mqttMessage.setQos(0);
mqttMessage.setRetained(true);
mqttClient.publish(topicName, mqttMessage);
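The client above relies on an mqttConnectOptions() bean; a minimal sketch using Eclipse Paho's MqttConnectOptions (the option values are assumptions, not from the slide):

@Bean
public MqttConnectOptions mqttConnectOptions() {
    MqttConnectOptions options = new MqttConnectOptions();
    options.setAutomaticReconnect(true); // reconnect to the MoP endpoint if the connection drops
    options.setCleanSession(true);
    options.setConnectionTimeout(10);
    return options;
}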
Spring - AMQP - Pulsar
https://www.baeldung.com/spring-amqp
@Bean
public CachingConnectionFactory connectionFactory() {
    CachingConnectionFactory ccf = new CachingConnectionFactory();
    ccf.setAddresses(serverName);
    return ccf;
}

rabbitTemplate.convertAndSend(topicName,
    DataUtility.serializeToJSON(observation));
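The rabbitTemplate used above can be wired from the same connection factory; a minimal sketch (queue and exchange declarations omitted):

@Bean
public RabbitTemplate rabbitTemplate() {
    return new RabbitTemplate(connectionFactory()); // talks to the AoP endpoint via the CachingConnectionFactory above
}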
Spring - Websockets - Pulsar
https://www.baeldung.com/websockets-spring
https://docs.streamnative.io/cloud/stable/connect/client/connect-websocket
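There is no code on the original slide; as a minimal sketch, a Spring StandardWebSocketClient can publish through Pulsar's WebSocket producer endpoint (the localhost URL, tenant/namespace/topic, and Base64-encoded JSON payload follow the Pulsar WebSocket API; treat the addresses as assumptions):

WebSocketSession session = new StandardWebSocketClient()
    .doHandshake(new TextWebSocketHandler(),
        "ws://localhost:8080/ws/v2/producer/persistent/public/default/my-topic")
    .get();

String payload = Base64.getEncoder()
    .encodeToString("hello".getBytes(StandardCharsets.UTF_8));
session.sendMessage(new TextMessage("{\"payload\":\"" + payload + "\"}"));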
Spring - Presto SQL/Trino - Pulsar via Presto or JDBC
https://github.com/ifengkou/spring-boot-starter-data-presto
https://github.com/tspannhw/phillycrime-springboot-phoenix
https://github.com/tspannhw/FLiP-Into-Trino
https://trino.io/docs/current/installation/jdbc.html
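For a quick check from Java, a minimal JDBC sketch against the Pulsar SQL (Presto/Trino) worker; the localhost:8081 address, user, and topic name are assumptions, and Pulsar SQL exposes topics as tables under the pulsar catalog and the namespace schema:

try (Connection conn = DriverManager.getConnection("jdbc:trino://localhost:8081", "spring", null);
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(
         "SELECT * FROM pulsar.\"public/default\".\"my-topic\" LIMIT 10")) {
    while (rs.next()) {
        System.out.println(rs.getString(1)); // first column of each row
    }
}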
Reactive Pulsar
https://github.com/datastax/reactive-pulsar
https://springone.io/2021/sessions/reactive-applications-with-apache-pulsar-and-spring-boot
https://github.com/lhotari/reactive-pulsar-showcase
https://www.slideshare.net/Pivotal/reactive-applications-with-apache-pulsar-and-spring-boot
<dependencies>
<dependency>
<groupId>com.github.lhotari</groupId>
<artifactId>reactive-pulsar-adapter</artifactId>
<version>0.2.0</version>
</dependency>
</dependencies>
REST + Spring Boot + Pulsar + Friends
StreamNative Hub
StreamNative Cloud
Apache Pulsar - Spring <-> Events <-> Spring <-> Anywhere

Diagram: Spring data apps and microservices exchange events through Pulsar (queuing + streaming). Pulsar speaks multiple protocols (native Pulsar, KoP, MoP, AoP, WebSocket), offloads to tiered storage for unified batch and stream STORAGE, and feeds data-driven apps and downstream systems through Pulsar sinks.
Demo
WHO?
Let’s Keep
in Touch!
Tim Spann
Developer Advocate
@PaaSDev
https://www.linkedin.com/in/timothyspann
https://github.com/tspannhw
Apache Pulsar Training
● Instructor-led courses
○ Pulsar Fundamentals
○ Pulsar Developers
○ Pulsar Operations
● On-demand learning with labs
● 300+ engineers, admins and
architects trained!
StreamNative Academy
Now Available
On-Demand
Pulsar Training
Academy.StreamNative.io
Pulsar
Resources Page
Learn More
https://spring.io/blog/2020/08/03/creating-a-function-for-consuming-data-and-generating-spring-cloud-stream-sink-applications
https://streamnative.io/blog/engineering/2022-04-14-what-the-flip-is-the-flip-stack/
https://streamnative.io/cloudforkafka/
https://github.com/tspannhw/airquality
https://github.com/tspannhw/pulsar-airquality-function
Spring Things
● https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/#_quick_start
● https://spring.io/guides/gs/spring-boot/
● https://spring.io/projects/spring-amqp
● https://spring.io/projects/spring-kafka
● https://spring.io/blog/2019/10/15/simple-event-driven-microservices-with-spring-cloud-stream
● https://github.com/spring-projects/spring-integration-kafka
● https://github.com/spring-projects/spring-integration
● https://github.com/spring-projects/spring-data-relational
● https://github.com/spring-projects/spring-kafka
● https://github.com/spring-projects/spring-amqp
FLiP Stack Weekly
This week in Apache Flink, Apache Pulsar, Apache
NiFi, Apache Spark, Java and Open Source friends.
https://bit.ly/32dAJft
● Buffer
● Batch
● Route
● Filter
● Aggregate
● Enrich
● Replicate
● Dedupe
● Decouple
● Distribute
StreamNative Cloud
Passionate and dedicated team.
Founded by the original developers of
Apache Pulsar.
StreamNative helps teams to capture,
manage, and leverage data using Pulsar’s
unified messaging and streaming
platform.
Founded By The
Creators Of Apache Pulsar
Sijie Guo
ASF Member
Pulsar/BookKeeper PMC
Founder and CEO
Jia Zhai
Pulsar/BookKeeper PMC
Co-Founder
Matteo Merli
ASF Member
Pulsar/BookKeeper PMC
CTO
Data veterans with extensive industry experience
Notices
Apache Pulsar™
Apache®, Apache Pulsar™, Pulsar™, Apache Flink®, Flink®, Apache Spark®, Spark®, Apache
NiFi®, NiFi® and the logo are either registered trademarks or trademarks of the Apache
Software Foundation in the United States and/or other countries. No endorsement by The
Apache Software Foundation is implied by the use of these marks.
Copyright © 2021-2022 The Apache Software Foundation. All Rights Reserved. Apache,
Apache Pulsar and the Apache feather logo are trademarks of The Apache Software
Foundation.
Extra Slides
Built-in
Back
Pressure
Producer<String> producer = client.newProducer(Schema.STRING)
.topic("SpringIOBarcelona2022")
.blockIfQueueFull(true) // enable blocking if queue is full
.maxPendingMessages(10) // max queue size
.create();
// During Back Pressure: the sendAsync call blocks
// with no room in queues
producer.newMessage()
.key("mykey")
.value("myvalue")
.sendAsync(); // can be a blocking call
Producing Object Events From Java

ProducerBuilder<Observation> producerBuilder =
    pulsarClient.newProducer(JSONSchema.of(Observation.class))
        .topic(topicName)
        .producerName(producerName)
        .sendTimeout(60, TimeUnit.SECONDS);

Producer<Observation> producer = producerBuilder.create();

msgID = producer.newMessage()
    .key(someUniqueKey)
    .value(observation)
    .send();
Producer-Consumer

The publisher sends data to a topic and doesn't know about the subscribers or their status. All interactions go through Pulsar, which handles all communication. The subscriber receives data from the topic and never directly interacts with the publisher.
Apache Pulsar: Messaging vs Streaming
Message Queueing - Queueing
systems are ideal for work queues
that do not require tasks to be
performed in a particular order.
Streaming - Streaming works
best in situations where the
order of messages is important.
➔ Perform in Real-Time
➔ Process Events as They Happen
➔ Joining Streams with SQL
➔ Find Anomalies Immediately
➔ Ordering and Arrival Semantics
➔ Continuous Streams of Data
DATA STREAMING
Pulsar’s Publish-Subscribe model

Diagram: Producer 1 and Producer 2 publish to a Topic on a Broker; a Subscription delivers the messages to Consumer 1, Consumer 2, and Consumer 3.

● Producers send messages.
● Topics are an ordered, named channel that producers use to transmit messages to subscribed consumers.
● Messages belong to a topic and contain an arbitrary payload.
● Brokers handle connections and route messages between producers and consumers.
● Subscriptions are named configuration rules that determine how messages are delivered to consumers.
● Consumers receive messages.
Using Pulsar for Java Apps

● High performance
● High security
● Multiple data consumers: transactions, CDC, fraud detection with ML
● Large data volumes, high scalability
● Multi-tenancy and geo-replication
Accessing historical as well as real-time data
Pub/sub model enables event streams to be sent from
multiple producers, and consumed by multiple consumers
To process large amounts of data in a highly scalable way
Testing via Phone MQTT
Simple Function (Pulsar SDK)
Sensors <-> Streaming Java Apps

Diagram: sensors feed a streaming edge gateway into StreamNative Hub / StreamNative Cloud. Pulsar (with KoP, MoP, and WebSocket protocol handlers) unifies queuing and streaming, offloads to tiered storage for unified batch and stream STORAGE, and delivers data through Pulsar sinks to unified batch and stream COMPUTING (batch + stream) for Java apps.
End-to-End Streaming Edge App

Apache Flink - Apache Pulsar - Apache NiFi <-> Devices

Diagram: devices feed a streaming edge gateway into StreamNative Hub / StreamNative Cloud. Pulsar (with KoP, MoP, WebSocket, and HTTP protocols) unifies queuing and streaming, offloads to tiered storage for unified batch and stream STORAGE, and delivers data through Pulsar sinks to unified batch and stream COMPUTING (batch + stream).
ML Java Coding
(DJL)
https://github.com/tspannhw/airquality
https://github.com/tspannhw/FLiPN-AirQuality-REST
https://github.com/tspannhw/pulsar-airquality-function
https://github.com/tspannhw/FLiPN-DEVNEXUS-2022
https://github.com/tspannhw/FLiP-Pi-BreakoutGarden
Source
Code
import java.util.function.Function;

public class MyFunction implements Function<String, String> {
    public String apply(String input) {
        return doBusinessLogic(input); // doBusinessLogic: your per-message transformation
    }
}
Your Code Here
Pulsar Function
Java
Setting Subscription Type Java
Consumer<byte[]> consumer = pulsarClient.newConsumer()
    .topic(topic)
    .subscriptionName("subscriptionName")
    .subscriptionType(SubscriptionType.Shared)
    .subscribe();
Subscribing to a Topic and setting
Subscription Name Java
Consumer<byte[]> consumer = pulsarClient.newConsumer()
    .topic(topic)
    .subscriptionName("subscriptionName")
    .subscribe();
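Once subscribed, messages are pulled and acknowledged in a loop; a minimal sketch (not on the original slide):

while (true) {
    Message<byte[]> msg = consumer.receive();
    try {
        System.out.println("Received: " + new String(msg.getValue()));
        consumer.acknowledge(msg); // remove the message from the subscription backlog
    } catch (Exception e) {
        consumer.negativeAcknowledge(msg); // request redelivery later
    }
}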
Run a Local Standalone Bare Metal

wget https://archive.apache.org/dist/pulsar/pulsar-2.10.0/apache-pulsar-2.10.0-bin.tar.gz
tar xvfz apache-pulsar-2.10.0-bin.tar.gz
cd apache-pulsar-2.10.0
bin/pulsar standalone

(For Pulsar SQL Support)
bin/pulsar sql-worker start

https://pulsar.apache.org/docs/en/standalone/
Building Tenant, Namespace, Topics
bin/pulsar-admin tenants create conference
bin/pulsar-admin namespaces create conference/spring
bin/pulsar-admin tenants list
bin/pulsar-admin namespaces list conference
bin/pulsar-admin topics create persistent://conference/spring/first
bin/pulsar-admin topics list conference/spring
Simple Function (Java Native)
Multiple Sends
Pulsar IO Functions in Java
bin/pulsar-admin functions create --auto-ack true --jar a.jar \
  --classname "io.streamnative.Tim" \
  --inputs "persistent://public/default/chat" \
  --log-topic "persistent://public/default/logs" \
  --name Chat \
  --output "persistent://public/default/chatresult"
https://github.com/tspannhw/pulsar-pychat-function
Pulsar Producer
import java.util.UUID;
import java.net.URL;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.ProducerBuilder;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.impl.auth.oauth2.AuthenticationFactoryOAuth2;
PulsarClient client = PulsarClient.builder()
    .serviceUrl(serviceUrl)
    .authentication(
        AuthenticationFactoryOAuth2.clientCredentials(
            new URL(issuerUrl), new URL(credentialsUrl), audience))
    .build();
https://github.com/tspannhw/pulsar-pychat-function
Pulsar Simple Producer
String pulsarKey = UUID.randomUUID().toString();
String OS = System.getProperty("os.name").toLowerCase();
ProducerBuilder<byte[]> producerBuilder = client.newProducer().topic(topic)
.producerName("demo");
Producer<byte[]> producer = producerBuilder.create();
MessageId msgID = producer.newMessage().key(pulsarKey).value("msg".getBytes())
.property("device",OS).send();
producer.close();
client.close();
https://github.com/tspannhw/pulsar-pychat-function
Producer
Steps
(Async)
PulsarClient client = PulsarClient.builder() // Step 1 Create a client.
.serviceUrl("pulsar://broker1:6650")
.build(); // Client discovers all brokers
// Step 2 Create a producer from client.
Producer<String> producer = client.newProducer(Schema.STRING)
.topic("hellotopic")
.create(); // Topic lookup occurs, producer registered with broker
// Step 3 Message is built, which will be buffered for async send.
CompletableFuture<MessageId> msgFuture = producer.newMessage()
    .key("mykey")
    .value("myvalue")
    .sendAsync(); // Client returns when message is stored in buffer
// Step 4 Message is flushed (usually automatically).
producer.flush();
// Step 5 Future returns once the message is sent and ack is returned.
msgFuture.get();
Producer
Steps
(Sync)
// Step 1 Create a client.
PulsarClient client = PulsarClient.builder()
.serviceUrl("pulsar://broker1:6650")
.build(); // Client discovers all brokers
// Step 2 Create a producer from client.
Producer<String> producer = client.newProducer(Schema.STRING)
    .topic("persistent://public/default/hellotopic")
    .create(); // Topic lookup occurs, producer registered with broker.
// Step 3 Message is built, which will be sent immediately.
producer.newMessage()
.key("mykey")
.value("myvalue")
.send(); // client waits for send and ack
Subscribing to a Topic and setting
Subscription Name Java
Consumer<byte[]> consumer = pulsarClient.newConsumer()
    .topic(topic)
    .subscriptionName("subscriptionName")
    .subscribe();
Fanout, Queueing, or Streaming

In Pulsar, you have the flexibility to use different subscription modes.

● To achieve fanout messaging among consumers: specify a unique subscription name for each consumer and use the exclusive (or failover) subscription mode.
● To achieve message queuing among consumers: share the same subscription name among multiple consumers and use the shared subscription mode.
● To allow for ordered consumption (streaming) while distributing work among any number of consumers: use the exclusive, failover, or key_shared subscription mode.
Setting Subscription Type Java
Consumer<byte[]> consumer = pulsarClient.newConsumer()
    .topic(topic)
    .subscriptionName("subscriptionName")
    .subscriptionType(SubscriptionType.Shared)
    .subscribe();
Delayed
Message
Producing
// Producer creation like normal
Producer<String> producer = client.newProducer(Schema.STRING)
.topic("hellotopic")
.create();
// With an explicit time
producer.newMessage()
.key("mykey")
.value("myvalue")
.deliverAt(millisSinceEpoch)
.send();
// Relative time
producer.newMessage()
.key("mykey")
.value("myvalue")
.deliverAfter(1, TimeUnit.HOURS)
.send();
Messaging versus Streaming

Messaging Use Cases:
● Service X commands service Y to make some change. Example: an order service removing an item from an inventory service.
● Distributing messages that represent work among n workers. Example: order processing outside the main “thread”.
● Sending “scheduled” messages. Example: a notification service for marketing emails or push notifications.

Streaming Use Cases:
● Moving large amounts of data to another service (real-time ETL). Example: logs to Elasticsearch.
● Periodic jobs moving large amounts of data and aggregating to more traditional stores. Example: logs to S3.
● Computing a near real-time aggregate of a message stream, split among n workers, with order being important. Example: real-time analytics over page views.
Differences in consumption

Retention:
● Messaging use case: the amount of data retained is relatively small, typically only a day or two of data at most.
● Streaming use case: large amounts of data are retained, with higher ingest volumes and longer retention periods.

Throughput:
● Messaging use case: messaging systems are not designed to manage big “catch-up” reads.
● Streaming use case: streaming systems are designed to scale and can handle use cases such as catch-up reads.
Producer Routing Modes (when not using a key)
### --- Kafka-on-Pulsar KoP (Example from standalone.conf)
messagingProtocols=mqtt,kafka
allowAutoTopicCreationType=partitioned
kafkaListeners=PLAINTEXT://0.0.0.0:9092
brokerEntryMetadataInterceptors=org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor
brokerDeleteInactiveTopicsEnabled=false
kopAllowedNamespaces=true
requestTimeoutMs=60000
groupMaxSessionTimeoutMs=600000
### --- Kafka-on-Pulsar KoP (end)
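With that configuration in place, an unmodified Kafka client can publish straight to Pulsar. A minimal sketch using the standard kafka-clients producer (the broker address and topic name are assumptions, not from the slide):

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // the Pulsar broker's Kafka listener (KoP)
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<>("my-topic", "mykey", "myvalue")); // maps to a Pulsar topic in the allowed namespace
producer.close();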
Cleanup
bin/pulsar-admin topics delete persistent://meetup/newjersey/first
bin/pulsar-admin namespaces delete meetup/newjersey
bin/pulsar-admin tenants delete meetup
https://github.com/tspannhw/Meetup-YourFirstEventDrivenApp
Messaging Ordering Guarantees
To guarantee message ordering, architect Pulsar to
take advantage of subscription modes and topic
ordering guarantees.
Topic Ordering Guarantees
● Messages sent to a single topic or partition DO have an ordering guarantee.
● Messages sent to different partitions DO NOT have an ordering guarantee.
Subscription Mode Guarantees
● A single consumer can receive messages from the same partition in order using an exclusive or failover
subscription mode.
● Multiple consumers can receive messages from the same key in order using the key_shared subscription
mode.
What’s Next?
Here are resources to continue your journey
with Apache Pulsar