Graffersid Blogs
January 2, 2024
What is JMS?
Java Message Service (JMS) is a powerful and versatile messaging
standard that facilitates communication between distributed client
applications. Developed under the Java Community Process, JMS
provides a vendor-agnostic interface, fostering seamless interaction
between different components in a distributed architecture.
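To make this concrete, here is a minimal sketch of sending and receiving a message with the JMS 2.0 simplified API. It is an illustrative sketch, not a canonical implementation: the broker URL and queue name are assumptions, and ActiveMQ Artemis is assumed as the provider (any JMS-compliant broker would work, since the javax.jms interfaces are vendor-neutral).

// A minimal JMS 2.0 sketch, assuming ActiveMQ Artemis as the provider.
// The broker URL and queue name below are illustrative placeholders.
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class JmsSketch {
    public static void main(String[] args) {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (JMSContext context = factory.createContext()) {
            Queue queue = context.createQueue("orders.queue"); // hypothetical queue
            // Point-to-point send: exactly one consumer will receive this message.
            context.createProducer().send(queue, "Hello from JMS");
            // Blocking receive with a one-second timeout.
            String body = context.createConsumer(queue).receiveBody(String.class, 1000);
            System.out.println("Received: " + body);
        }
    }
}

Because the code depends only on the javax.jms interfaces, swapping providers means changing the ConnectionFactory line, not the messaging logic.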
Advantages of JMS:
Interoperability: JMS’s vendor-neutral API allows integration with
diverse messaging systems and technologies.
Reliability: Transactional support ensures message delivery even
in the face of system failures, enhancing the overall reliability of
distributed applications.
Flexibility: The support for different messaging models provides
flexibility in designing communication patterns tailored to specific
use cases.
Standardization: Being a Java standard, JMS benefits from a
large community and well-defined specifications, ensuring a
stable and mature messaging solution.
Robust Ecosystem: JMS integrates seamlessly with other Java-
based technologies, creating a robust ecosystem for building
complex, distributed systems.
Drawbacks of JMS:
Complex Configuration: Setting up and configuring JMS can be
complex, especially for users new to messaging systems,
potentially leading to longer deployment times.
Learning Curve: The learning curve for mastering JMS concepts
and best practices might be steep, requiring dedicated resources
for training and implementation.
Scalability Challenges: While JMS supports horizontal scalability,
managing large-scale deployments may pose challenges,
demanding careful consideration of architecture and
configuration.
Potential Latency: In certain scenarios, synchronous request-reply
patterns built on point-to-point messaging may introduce latency,
impacting real-time communication requirements.
Limited Language Support: Although primarily designed for Java,
JMS has limited native support for other programming
languages, which might pose integration challenges in a polyglot
environment.
What is Kafka?
Apache Kafka stands as a distributed streaming platform that excels in
handling real-time data feeds and stream processing. Originally
developed by LinkedIn, Kafka has evolved into an open-source
powerhouse, providing a robust foundation for building scalable and
fault-tolerant data pipelines.
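As a point of comparison with the JMS sketch above, here is a minimal producer sketch using the official kafka-clients Java library; the bootstrap server address and topic name are illustrative assumptions.

// A minimal Kafka producer sketch using the official kafka-clients library.
// The bootstrap server and topic name are illustrative placeholders.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The record key determines the partition, which preserves per-key ordering.
            producer.send(new ProducerRecord<>("page-views", "user-42", "viewed /home"));
        }
    }
}

Unlike a JMS queue, the message is not deleted on consumption: it stays in the topic's log until the retention policy expires it, so any number of consumer groups can replay the same stream.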
Advantages of Kafka:
Real-time Data Processing: Kafka’s ability to handle real-time
data streams makes it an ideal choice for applications requiring
low-latency processing and analytics.
Decoupling of Systems: Kafka provides a decoupled
architecture, allowing different components of a system to
operate independently, enhancing flexibility and resilience.
Elasticity and Scalability: The distributed nature of Kafka
facilitates elasticity, enabling seamless scalability to adapt to
changing workloads and data volumes.
Versatility: Kafka’s versatility extends beyond traditional
messaging, supporting use cases such as log aggregation, event
sourcing, and building data lakes.
Community and Ecosystem: Kafka benefits from a vibrant
open-source community and an extensive ecosystem of
connectors, tools, and integrations, making it well-supported and
adaptable.
Drawbacks of Kafka:
Complexity: Implementing and managing Kafka can be
complex, especially for users unfamiliar with distributed systems,
requiring a significant learning curve.
Resource Intensive: Kafka may be resource-intensive,
particularly in scenarios with high data volume and velocity,
necessitating careful consideration of hardware and
infrastructure.
Operational Overhead: Operating Kafka clusters involves
ongoing maintenance and monitoring, adding to the operational
overhead for organizations.
Message Retention Management: Defining and managing
message retention policies in Kafka can be challenging, requiring
careful planning to balance storage needs and data accessibility.
Integration Challenges: Integrating Kafka with existing systems
might pose challenges, especially when migrating from
traditional messaging solutions or when dealing with diverse
technology stacks.
1. Messaging Models:
JMS: Supports both point-to-point (queues) and publish-subscribe
(topics) models; a message is typically removed from a queue once
it has been consumed.
Kafka: Uses a partitioned, replicated commit log; messages are
retained for a configurable period regardless of consumption, so
multiple consumer groups can read and replay the same stream.
2. Scalability:
JMS: Scales primarily through broker clustering, which can require
careful tuning at very high throughput.
Kafka: Designed for horizontal scalability; topics are split into
partitions that can be spread across brokers and consumed in parallel.
3. Reliability:
JMS: Provides acknowledgement modes and transactional sessions
that guarantee delivery even across failures.
Kafka: Achieves fault tolerance through partition replication and
configurable producer acknowledgement settings.
6. Language Support:
JMS: Primarily designed for Java, with limited native support for
other programming languages.
Kafka: Offers better language support with official client libraries
for Java, Python, Go, and more, catering to a polyglot
environment.
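To illustrate the Java client library mentioned above, here is a minimal consumer sketch; the group id and topic name are illustrative assumptions, and a production consumer would poll in a loop and manage offsets deliberately.

// A minimal Kafka consumer sketch; group id and topic are illustrative placeholders.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "page-view-readers"); // consumers in one group share partitions
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("page-views"));
            // A real consumer polls in a loop; a single poll is shown for brevity.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records) {
                System.out.println(r.key() + " -> " + r.value());
            }
        }
    }
}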
8. Use Cases:
JMS: Well suited to traditional enterprise integration, request-reply
patterns, and transactional workflows within Java-centric stacks.
Kafka: Favoured for event streaming, log aggregation, real-time
analytics, and high-volume data pipelines.
Kafka Popularity:
Kafka has become one of the most widely adopted open-source
streaming platforms, powering event-driven architectures across industries.
Market Trends:
The broader shift toward real-time, event-driven data processing
continues to fuel Kafka adoption, while JMS remains entrenched in
established enterprise Java estates.
Consideration Factors:
The choice between JMS and Kafka often depends on the specific
requirements of the project, existing technology stack, and the nature
of data processing needs. While JMS remains a solid choice for
traditional enterprise messaging scenarios, Kafka’s popularity
continues to rise, driven by the demands of modern, data-intensive
applications.
In conclusion, the decision between JMS and Kafka is not binary but
strategic, rooted in the unique characteristics and aspirations of your
organization. By aligning the strengths of each messaging solution
with your specific use cases, you can build a robust communication
infrastructure that propels your organization into the future of data
processing.
Cost Of JMS (Java Message Service):
JMS is a specification rather than a product, so costs depend on the
chosen provider: open-source brokers such as ActiveMQ are free to
use, while commercial providers charge licence and support fees.
Cost of Kafka:
Confluent Platform:
Confluent, the company founded by the creators of Apache Kafka,
offers the Confluent Platform with additional enterprise features and support.
Confluent’s pricing typically involves a subscription model based on
factors like data volume, connectors, and level of support.
Costs can range from a few thousand dollars per month to higher
amounts based on the specific subscription tier.
Managed Kafka Services:
Cloud providers like AWS, Azure, and Google Cloud offer managed
Kafka services (e.g., Amazon MSK, Azure Event Hubs for Kafka, and
Google Cloud Pub/Sub with a Kafka interface).
Pricing for managed services is usually based on factors such as
data transfer, storage, and the number of operations.
Costs can vary but are generally in the range of several hundred to
several thousand dollars per month, depending on usage.
Considerations:
Conclusion
In the ever-evolving landscape of modern technology, the choice
between Java Message Service (JMS) and Apache Kafka is not just a
matter of preference but a strategic decision that shapes the efficiency
and scalability of communication within an organization. As CEOs,
CTOs, Hiring Managers, Project Managers, Entrepreneurs, and
Startup Founders navigate the intricate realm of messaging solutions,
understanding the nuances and distinctive features of JMS and Kafka
becomes paramount.
Reflecting on JMS:
JMS, with its solid foundation and maturity, has long been the stalwart
of traditional enterprise messaging. Its reliability, transactional support,
and established presence within Java-centric environments make it a
reliable choice, particularly for organizations with legacy systems and
a focus on stability. JMS continues to hold its ground in scenarios
where the emphasis is on proven solutions and interoperability within
Java ecosystems.
SELECT emp_name AS Employee, salary AS Salary FROM employee ORDER BY salary DESC LIMIT 0,1;
SELECT * FROM (
    SELECT emp_name, salary,
           DENSE_RANK() OVER (ORDER BY salary DESC) AS ranking
    FROM employee
) AS k
WHERE ranking = 3;
DBMS | RDBMS
Data elements need to be accessed individually. | Multiple data elements can be accessed at the same time.
No relationship between data. | Data is stored in the form of tables which are related to each other.
Normalization is not present. | Normalization is present.
It stores data in either a navigational or hierarchical form. | It uses a tabular structure where the headers are the column names, and the rows contain corresponding values.
Data redundancy is common in this model. | Keys and indexes do not allow data redundancy.
Examples: XML, Windows Registry, FoxPro, dBase III Plus, etc. | Examples: MySQL, PostgreSQL, SQL Server, Oracle, Microsoft Access, etc.
db.createCollection("posts")
db.posts.insertMany([
  { title: "Post 1", body: "Body of post 1" },
  { title: "Post 2", body: "Body of post 2" }
]);
Fields
The following operators can be used to update fields:
$set – sets the value of a field
$unset – removes the specified field
$inc – increments a field’s value by a given amount
$rename – renames a field
$currentDate – sets a field to the current date
Array
The following operators assist with updating arrays:
$push – appends a value to an array
$pop – removes the first or last element of an array
$pull – removes all values matching a condition
$addToSet – adds a value to an array only if it is not already present
Function | Procedure
A function has a return type and returns a value. | A procedure does not have a return type, but it can return values using OUT parameters.
You cannot use a function with data-manipulation queries; only SELECT queries are allowed in functions. | You can use DML queries such as INSERT, UPDATE, and DELETE with procedures.
A function does not allow output parameters. | A procedure allows both input and output parameters.
You cannot call stored procedures from a function. | You can call a function from a stored procedure.
You can call a function using a SELECT statement. | You cannot call a procedure using SELECT statements.
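The calling conventions in the last two rows can be seen from Java client code as well. Below is a minimal JDBC sketch; the connection URL, the function get_bonus, and the procedure update_salary are hypothetical names used only for illustration.

// A minimal JDBC sketch contrasting how a function and a procedure are invoked.
// The URL, credentials, get_bonus, and update_salary are hypothetical.
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RoutineCallSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:mysql://localhost/test", "user", "pass")) {
            // A function can be embedded in a SELECT statement.
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT get_bonus(1001)")) {
                if (rs.next()) System.out.println("Bonus: " + rs.getDouble(1));
            }
            // A procedure is invoked via a CALL statement, not a SELECT.
            try (CallableStatement cs = conn.prepareCall("{call update_salary(?, ?)}")) {
                cs.setInt(1, 1001);
                cs.setDouble(2, 50000.0);
                cs.execute();
            }
        }
    }
}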
The following are the key differences between triggers and stored
procedures:
1. Triggers cannot be manually invoked or executed.
2. Triggers cannot receive parameters.
3. A transaction cannot be committed or rolled back inside a trigger.
Syntax:
CREATE TRIGGER trigger_name
[BEFORE | AFTER] [INSERT | UPDATE | DELETE]
ON table_name
FOR EACH ROW
trigger_body;
ALTER TABLE statement
The ALTER TABLE statement allows you to:
add, drop, or modify columns in an existing table;
add or drop constraints;
rename columns or the table itself.
Implicit cursor:
BEGIN
  SELECT attr_name FROM table_name
  WHERE CONDITION;
END;

Explicit cursor:
DECLARE
  CURSOR cur_name IS
    SELECT attr_name FROM table_name
    WHERE CONDITION;
BEGIN
  ...
1. View:
A view is a virtual table that does not actually exist in the database
but can be produced upon request by a particular user. A
view is an object that gives the user a logical view of data
from a base table; we can restrict what a user can view by
allowing them to see only the necessary columns from the
table and hiding the other database details. Views also permit
users to access data according to their requirements, so the
same data can be accessed by different users in different
ways according to their needs.
2. Cursor:
A cursor is a temporary work area created in memory for
processing and storing the information related to an SQL
statement when it is executed. The temporary work area is
used to store the data retrieved from the database and to
manipulate that data as needed. It contains all the
necessary information on the data accessed by the SELECT
statement. It can hold a set of rows, called the active set, but can
access only a single row at a time. There are two different
types of cursors:
1. Implicit Cursor
2. Explicit Cursor
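For a client-side analogue of this row-at-a-time behaviour, a JDBC ResultSet in Java exposes query results one row at a time, much like a cursor's active set. A minimal sketch follows; the connection URL and the employee table are assumptions for illustration.

// A minimal JDBC sketch: the ResultSet holds the selected rows (like an active set)
// but exposes them one row at a time. URL and table are illustrative placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RowAtATimeSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:mysql://localhost/test", "user", "pass");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT emp_name, salary FROM employee")) {
            while (rs.next()) { // advance to the next row, one row per iteration
                System.out.println(rs.getString("emp_name") + ": " + rs.getDouble("salary"));
            }
        }
    }
}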
Difference between View and Cursor in SQL :
Sr.No. | Basis of Comparison | View | Cursor
4. | Types | There are two types of views: simple and complex. | There are two types of cursors: implicit and explicit.
1. Cursor in PL/SQL:
A cursor can basically be referred to as a pointer to the
context area. The context area is a memory area that is created
by Oracle when an SQL statement is processed. The cursor is
thus responsible for holding the rows that have been
returned by a SQL statement, and PL/SQL controls the
context area with the help of the cursor. An active set is
the set of rows that the cursor holds. The cursor can be of
two types: Implicit Cursor and Explicit Cursor.
Advantages of Cursor:
They are helpful in performing row-by-row processing
and row-wise validation on each row.
Better concurrency control can be achieved by using
cursors.
Cursors are faster than while loops.
Disadvantages of Cursor:
They use more resources each time and thus may result
in network round trips.
A greater number of network round trips can degrade
performance and reduce speed.
2. Trigger in PL/SQL:
A trigger is basically a program which gets executed automatically
in response to some event, such as a modification in
the database. Some of the events that cause their execution are DDL
statements, DML statements, or other database
operations. Triggers are thus stored within the database and
come into action when the specified conditions match; hence, they
can be defined on any schema, table, view, etc. There are six
types of triggers: BEFORE INSERT, AFTER INSERT,
BEFORE UPDATE, AFTER UPDATE, BEFORE DELETE,
and AFTER DELETE.
Advantages of Trigger:
They are helpful in keeping track of all the changes
within the database.
They also help in maintaining the integrity constraints.
Disadvantages of Trigger:
They are difficult to view, which makes debugging
difficult as well.
Too much use of triggers or writing complex code
within a trigger can slow down performance.
Difference between Cursor and Trigger:
S.No. | Cursor | Trigger
1. | It is a pointer which is used to control the context area and also to go through the records in the database. | It is a program which gets executed in response to the occurrence of some event.
4. | The main function of the cursor is the retrieval of rows from the result set, one at a time (row by row). | The main function of the trigger is to maintain the integrity of the database.
6. | The main disadvantage of the cursor is that it uses more resources each time and thus results in network round trips. | The main disadvantage of the trigger is that it is hard to view, which makes debugging really difficult.
This procedure is a general guideline for creating a loop group for parsing a large set of
input records in parts. You may want to modify the procedure to include additional
processing of the records, or you may want to change the XPath expressions to suit your
business process. To process a large number of records, follow these steps.
Procedure
1. Select and drop the Parse Data activity on the process editor.
2. On the General tab, specify the fields and select the Manually Specify Start
Record check box.
3. Select the Parse Data activity and click the group icon on the tool bar to create a
group containing the Parse Data activity.
4. Specify Repeat Until True Loop as the Group action, and specify an index name
(for example, "i").
The loop must exit when the EOF output item for the Parse Data activity is set
to true. For example, the condition for the loop can be set to the
following: string($ParseData/Output/done) = string(true())
5. Set the noOfRecords input item for the Parse Data activity to the number of
records you want to process for each execution of the loop.
If you do not select the Manually Specify Start Record check box on
the General tab of the Parse Data activity, the loop processes the
specified noOfRecords with each iteration, until there are no more input records
to parse.
You can optionally select the Manually Specify Start Record check box to specify
the startRecord on the Input tab. If you do this, you must create an XPath
expression to properly specify the starting record to read with each iteration of
the loop. Because the count of records in the input starts at zero, the
startRecord input item could be set to the current value of the loop index
minus one, for example: $i - 1
Procedure
bw.engine.enable.memory.saving.mode=true
Finally, you can set it once for all AppNodes of a server by commenting this property
(before AppNode/AppSpace creation) and adding the following line at the end of the
bwappnode.tra file:
java.property.bw.engine.enable.memory.saving.mode=true
In a BWCE environment, this can be done by adding the following parameter to the
container JVM arguments (for example, via the BW_JAVA_OPTS environment variable):
-Dbw.engine.enable.memory.saving.mode=true
Go to your application configuration and open the ‘Advanced’ tab of the process archive
configuration:
How to execute a Process at application start-up or shutdown in
BusinessWorks and BusinessWorks Container Edition
1. Go to the Module Descriptor Overview panel of your application and click on the Create
Activator Process link.
2. The Activator Process Creation panel is displayed; select the target Process Folder and
Package, then enter a name for the Process and click Finish.
Creation of an Activator Process
This part is well known, as the main properties available to configure the
elements managing the input flows are exposed in TIBCO Administrator.
Flow control in Administrator
‘Max Jobs’: this property relates to a mechanism, available since the very
first release of BusinessWorks, to control incoming flows for any
transport. The ‘Max Jobs’ property defines the number of process instances
that can be created for a given process starter. When this limit is reached and
new events are received, process instances are created, serialized, and then
written to disk without being executed; when the number of process
instances goes back below the limit, a pending process instance is read from
disk, de-serialized, and instantiated for execution.
This value can be defined in TIBCO Administrator and also in the XML
deployment file used by AppManage. The default value of 75 can be changed
for all HTTP process starters of a given BusinessWorks engine at once with
the following property: bw.plugin.http.server.maxProcessors
Resolution:
You can achieve this by passing global variables as BW command-line
arguments. After deploying a BW project, a .cmd file is created, and in this
file there is a command like this:
"C:/tibco/bw/5.3/bin/bwengine.exe" --run --propFile "…tra"
Please also note that there is another way to do this: if you add the global
variables in bwengine.xml, then you can modify these values during deployment.
Summary
How to change a global variable without redeploying the application?
Environment: ALL
Resolution:
1). Create a file named “Properties.cfg” in any location.
4). Search for tibco.env.APP_ARGS and add the path of the Properties.cfg created in
Step 1 with the parameter “-p”, e.g. tibco.env.APP_ARGS=-p
C:/tibco/bw/5.9/bin/Properties.cfg
AND/OR
Resolution
TIBCO FTL® is a robust, high-performance messaging platform for real-time enterprise
communications that provides high-throughput data distribution to virtually any device.
All this capability is delivered with a peer-to-peer architecture and API libraries that run on
commodity general-purpose systems, with no specialized hardware or servers required.
One-to-Many Publishing
The primary data-transfer model in TIBCO FTL software
is one-to-many publishing. A publisher produces a stream of
messages, and every subscriber to that stream can receive
every message in the stream.
This model is analogous to radio broadcast: a radio station
broadcasts a stream of audio data, and every radio receiver
on the same frequency can receive that audio stream.