We made a lot of design adjustments to the user interface for a more modern and pleasant look.
- Many design improvements.
- Fixed an issue where Mac users could see icon display errors in Safari.
- Fixed an issue where some users had problems activating their licenses.
- Update notifications are now more eye-catching.
- Fixed an issue where consumer groups were sometimes displayed incorrectly or not at all.
- Flow View now provides an option to show inactive consumer groups.
We heard you: the topic/stream list is now resizable, and long names are no longer truncated.
Fixes and improvements:
- After clicking on a topic/stream, the search input in the topic list/stream list is no longer cleared.
- For unsaved changes to a view in the Data Browser (e.g. filters, codecs, …), a save button is now displayed to make the relationship between changes and views more transparent.
- Fixed some layout errors in KaDeck Web when used with Firefox.
- Fixed an issue where activating licenses via online mode could fail despite a valid license.
KaDeck 3.1 – the biggest update yet, with over 30 new features
Our goal for this update was to provide all the tools for the daily work with both Apache Kafka and Amazon Kinesis.
Therefore, Schema Registry, Kafka Connect, and ACLs can now be managed in KaDeck.
In addition, our powerful data transformation and filtering engine has become faster and gains many new features, such as NodeJs V4 support.
Schema Registry Management
View, manage and evolve schemas or create new ones. Supports Protobuf, Avro, and JSON Schema.
Manage Kafka Connect
Create new Kafka Connect instances, monitor and manage Kafka Connect tasks.
ACL Management
View, manage and create ACLs comfortably with the included ACL Wizard.
Quick Processor - powerful data transformation and filtering
You can now even modify record headers and use NodeJs (V4) libraries.
Over 30 features and improvements
KaDeck 3.1.1 comes with over 30 new features and improvements and over 20 bug fixes!
- NodeJs V4 support
- Change record headers
- Significant performance improvements (min. 15% faster!)
- New connection wizard for Confluent Cloud and Apache Kafka properties
- Redesigned and improved user interface for creating new connections
- Skip hostname verification for Schema Registry
Data Browser & Stream/Topic list
- Faster loading of streams/topics
- Large numbers of records are now displayed and loaded in fractions of a second
- Mark streams/topics and their views as favorites
- Group records by their key so that only the newest one is displayed
- Full-text search for displayed records
- Improved interface makes spotting active filters, running Live Mode, and overall day-to-day work much easier
- Filtering by specific attributes of a JSON or Avro dataset now respects nesting
- New filter options: “contains” and “does not contain”
- Select various delimiters for exporting records to CSV
- Records with large numbers (longer than 53 bits) are now displayed correctly in the UI and can also be created via the ingestion dialog
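The 53-bit limit comes from the way JavaScript-based UIs handle JSON numbers: they are IEEE-754 doubles, which represent integers exactly only up to 2^53. A minimal Node sketch (not KaDeck code) of the pitfall:

```javascript
// JavaScript Numbers are IEEE-754 doubles: integers are only exact up to 2^53.
const big = 9007199254740993n; // 2^53 + 1, as a BigInt

// Parsing such a value as a plain Number silently loses precision:
const asNumber = Number("9007199254740993");
console.log(asNumber === 9007199254740992); // true – the last digit is gone

// Parsing it as a BigInt keeps the exact value:
const asBigInt = BigInt("9007199254740993");
console.log(asBigInt === big); // true
```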
Stream Details / Topic Inspector
- Change the offset of consumer groups or individual consumers per topic or for all topics
- Significant performance improvement
- Inactive consumer groups are now displayed
- Search for specific consumers by name
- See the maximum and minimum offset lag in the consumer group overview
- Embedded cluster connection is hidden if no embedded cluster is configured
- New Feedback dialog – let us know your feedback and rate us!
- Many UI changes, more help menus
- Many bug fixes
- Fixed: the stream search bar was not fully displayed when multiple streams were shown in the list
- HttpClient has been updated to 4.5.13, fixing a bug with Wildcard SSL certificates
- Fixed: the layout of the profile view was slightly shifted
KaDeck 3.0 – a big leap.
Amazon Kinesis Integration, Live Mode, Multi-Select, Stateful Quick Processor, CSV Export, and much more.
Full Amazon Kinesis Integration
Manage streams, transform & ingest data or monitor and analyze data in real-time with KaDeck and Amazon Kinesis.
Live Mode
See how data flies in as it’s produced and apply transformations and filtering in real-time.
Multi-Select
Select multiple records for ingestion or export.
Data correction example
Transform erroneous records with the Quick Processor and send them back to their original stream.
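As an illustration only (the actual Quick Processor scripting API is not shown here), such a correction can be sketched as a plain transform function:

```javascript
// Hypothetical sketch of a data-correction transform (plain Node, not the
// actual Quick Processor API): fix a malformed "amount" field that was
// produced as a string with a comma decimal separator.
function correctRecord(record) {
  const value = { ...record.value };
  if (typeof value.amount === "string") {
    // "12,50" -> 12.5
    value.amount = parseFloat(value.amount.replace(",", "."));
  }
  return { ...record, value };
}

const broken = { key: "order-1", value: { amount: "12,50", currency: "EUR" } };
console.log(correctRecord(broken).value.amount); // 12.5
```

The corrected records could then be sent back to their original stream via the ingestion dialog.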
Stateful Quick Processor
Calculate moving averages or create aggregates by using the new state store.
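A minimal sketch of the idea in plain Node – the real state store API may differ, and the `state` object here merely stands in for it:

```javascript
// Sketch of a stateful moving average over record values. The "state"
// object stands in for the Quick Processor's new state store.
function movingAverage(state, value, windowSize = 5) {
  state.window = state.window || [];
  state.window.push(value);
  if (state.window.length > windowSize) state.window.shift(); // drop oldest
  return state.window.reduce((sum, v) => sum + v, 0) / state.window.length;
}

const state = {};
[10, 20, 30].forEach(v => console.log(movingAverage(state, v)));
// 10, then 15, then 20
```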
CSV Export
Export your KaDeck view to a CSV file – you know… why not?
And much more
KaDeck 3.0 comes with many improvements:
- Add all JSON/Avro data attributes as columns with a single click.
- Use the SHIFT key to select multiple records if multi-select is enabled.
- If you add more columns than will fit on your screen, a horizontal scroll bar is now displayed in the data browser.
- Stream list is now alphabetically sorted by default.
- The stream list is now cached. The cache can be configured in the settings.
- When hovering over a stream in the stream list, a context menu is displayed with detailed information.
- Query results are now streamed to the frontend – you no longer need to wait for a query to complete.
- When the maximum number of data sets loaded into the UI is reached, the oldest data sets are gradually dropped.
- And as always: many bug fixes.
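The gradual dropping of the oldest data sets is essentially a bounded buffer; a minimal sketch (illustrative only, not KaDeck internals):

```javascript
// Minimal sketch of a bounded record buffer: once the display limit is
// reached, the oldest records are dropped as new ones arrive.
class BoundedBuffer {
  constructor(limit) { this.limit = limit; this.items = []; }
  push(item) {
    this.items.push(item);
    if (this.items.length > this.limit) this.items.shift(); // drop oldest
  }
}

const buf = new BoundedBuffer(3);
[1, 2, 3, 4, 5].forEach(n => buf.push(n));
console.log(buf.items); // [3, 4, 5]
```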
The version 2.2.1 update for KaDeck comes with several improvements for the Avro codec and includes a reworked topic inspector screen.
- Ingestion: It is now possible to specify the schema to use for the ingestion of your Avro record.
- Nullable union types now include the type in the record-list representation. This behavior can be switched off in the settings.
Delete records in topics
- Records inside topics can now be deleted. Specify the partition and offset or wipe a topic completely from the topic list (right-click -> wipe topic).
- The topic inspector was fully reworked.
- Total consumer lag and consumer lag of each consumer are now displayed.
- Topic configurations can now be viewed and edited.
The version 2.1.3 update comes with several improvements around codecs (ingestion & consumption) and connectivity.
- Use the search to quickly find the right records: it is no longer necessary to enclose a search string in quotation marks. You can now find records by typing *needle, needle* or *needle* in the search field. Of course, you can still use regex.
- JSON consumption (decoding): all JSON value types (Arrays, Objects, Numbers, Strings, Booleans, …) are now correctly decoded and no longer enclosed in quotation marks.
- JSON ingestion (encoding): all JSON value types (Arrays, Objects, Numbers, Strings, Booleans, …) are now correctly encoded.
- AVRO ingestion (encoding): added support for primitives.
- The ingestion dialog now displays the recognized type of JSON value to make ingesting without switching to the JSON View tab much easier.
- Strings no longer have to be enclosed in quotation marks.
- If an error occurs during connection setup, additional information is now displayed to simplify troubleshooting.
- The verification of the rights required by KaDeck was extended.
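The quote-free wildcard search mentioned above (*needle, needle*, *needle*) boils down to translating a wildcard pattern into a regular expression; a sketch (not KaDeck’s actual implementation):

```javascript
// Sketch: translate a simple wildcard pattern (* matches anything) into a
// regular expression.
function wildcardToRegExp(pattern) {
  // Escape regex metacharacters, then turn * into .*
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
}

console.log(wildcardToRegExp("needle*").test("needlework")); // true
console.log(wildcardToRegExp("*needle").test("pine needle")); // true
console.log(wildcardToRegExp("needle").test("needles")); // false
```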
Support for Kafka 2.5.0
- The Kafka client has been updated to version 2.5.0
The version 2.1.2 update includes several bug fixes and improvements.
Become a KaDeck Insider!
We have also launched our KaDeck Insider program: as a KaDeck Insider you will be the first to get the latest functionalities for testing, be in direct contact with our team and benefit from many other advantages. Go to kadeck.com/insider to register.
- It is now possible to let Apache Kafka automatically decide in which partition the new record should be created.
- When creating a new record based on an existing one, the target partition is also copied.
- Fix: the target partition was ignored in certain situations.
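For background: when no target partition is given, the Kafka client’s default partitioner derives one from the record key, while keyless records are spread across partitions. A simplified sketch – the real client hashes the serialized key with murmur2, and newer clients use sticky partitioning for keyless records:

```javascript
// Simplified sketch of Kafka's default partitioning logic. A plain string
// hash stands in for the murmur2 hash the real client uses.
function hashString(s) {
  let h = 0;
  for (const ch of s) h = (h * 31 + ch.charCodeAt(0)) | 0;
  return Math.abs(h);
}

function choosePartition(key, numPartitions, nextRoundRobin) {
  if (key == null) {
    // No key: spread records across partitions (round-robin here).
    return nextRoundRobin % numPartitions;
  }
  return hashString(key) % numPartitions;
}

// The same key always lands on the same partition:
console.log(choosePartition("order-42", 6, 0) === choosePartition("order-42", 6, 7)); // true
```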
Custom codec loading
- Fix: Custom codecs were not loaded correctly on Linux and Unix.
Connecting to clusters
- Not all access rights are required anymore to establish a connection with KaDeck. If these “optional rights” (e.g. describe cluster, describe consumer groups) are missing, KaDeck can still be used with limited functionality.
- A new check of the access rights has been added: Describe consumer groups. If consumer groups cannot be described, a message is displayed. Without this right you can still use KaDeck, but not all functions are available.
- Many UX improvements
- An embedded cluster connection card is now automatically created on first startup, as it was not clear that a connection had to be created manually in order to connect to the embedded cluster.
- When starting the embedded cluster the new corresponding server connection is automatically activated.
- Changed the default values of certain settings, mainly increasing timeouts and limits.
- Settings “Data Scan Max Records” and “Data Processing Limit” were renamed to “Data Scan Limit” and “Data Display Limit”. Additionally, descriptions were added.
- Cluster connections are now sorted based on their name.
- Added a hint on the record list screen that explains why you might not see all records because of a customizable application-wide “data display limit” and “data scan limit” setting.
- And many more minor improvements
The version 2.1 update includes many new features and important improvements. As always, we hope you enjoy reading about our new features and can’t wait to receive your feedback.
We have completely revised our codecs.
- Avro Codecs
Our Avro codecs in particular now support more data types and make use of the new metadata that a codec can now additionally generate. Namespace, documentation, and schema ID are now displayed in addition to the decoded value in KaDeck.
- Custom Codecs (Build your own)
The codec interface has also changed considerably: the methods have been extended with more parameters, which now include information about the headers of a consumer record, the partition, and much more. For compatibility, we have overloaded the methods so that the new interface remains backward compatible with custom codecs that still use the old (pre-2.1) interface. A codec can now also add metadata; our new Avro codec uses this feature to surface additional information about the Avro schema.
The new metadata works both ways: for decoding and encoding! For encoding (during ingestion), the new meta fields can be used to provide additional information to the codec. This enables entirely new application scenarios. We are looking forward to hearing your feedback on this!
- New CSV Codec
We’ve added a new CSV codec, which displays records containing a CSV table row as JSON with separate columns. Records with multiple rows are also displayed correctly. In combination with our Quick Processor, you can even name these columns as you wish.
In version 2.1, the options for creating new records have been extended.
- Define your target
Records can now be copied from one topic to another topic of your choice. It is also now possible to specify the exact partition.
Each record can be supplemented with additional meta information in the JSON view during creation (keyMeta and valueMeta attribute fields). Thus, further information can be transferred to your own codecs, and entirely new application scenarios can be implemented.
- KaDeck now includes the AdoptOpenJDK JRE 11 to work out of the box. Configuring JAVA_HOME on your machine is no longer required.
- More detailed error messages when connecting to a server: the exception is now displayed in the dialog. The individual connection steps are now also written in the log file to simplify troubleshooting.
- UI changes
New configuration options have been added to the settings screen:
- Poll Timeout (ms)
The maximum time to block during polling. Increase this timeout if your cluster or connection is slow, so KaDeck will wait longer for the cluster to return some records.
- Data Scan Max Records
The maximum number of records that are being scanned before filtering and processing them.
- Data Process Limit
The maximum number of records that are being processed after filtering.
Using these two settings in combination, it is possible to fine-tune the number of records that can be scanned and processed based on the machine’s memory that KaDeck is running on.
Example: If you want to filter for specific records within millions of records, you can now set Data Scan Max Records to a high value and set Data Process Limit according to your machine’s memory to limit the maximum number of matches. This also prevents KaDeck from crashing if your machine is not able to handle a high number of matches.
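The interplay of the two settings can be sketched as a scan-then-filter loop with a cap on matches (illustrative only, with hypothetical names):

```javascript
// Sketch of how the two limits interact: scan at most `scanLimit` records,
// filter them, and keep at most `processLimit` matches.
function query(records, predicate, scanLimit, processLimit) {
  const matches = [];
  for (const record of records.slice(0, scanLimit)) {
    if (!predicate(record)) continue;
    matches.push(record);
    if (matches.length >= processLimit) break; // memory cap reached
  }
  return matches;
}

const records = Array.from({ length: 1000 }, (_, i) => i);
// Scan up to 500 records, keep at most 3 even numbers:
console.log(query(records, n => n % 2 === 0, 500, 3)); // [0, 2, 4]
```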
- If the connection to a cluster is not possible, the underlying error is now shown in the error dialog.
- Added help buttons to the cluster connection dialog, embedded cluster dialog, and topic browser.
- Server browser was renamed to Cluster overview.
- KaDeck Professional was renamed to KaDeck Desktop.
- Many smaller improvements.
Help Center Article: How to connect to Apache Kafka
- We have added a help center article that covers how to configure SSL and other connection details, including a step-by-step guide to connecting to Confluent Cloud.
- Increased the default poll timeout to one second and made this setting configurable. This fixes a bug that some users were experiencing that resulted in no records being displayed every now and then when the connection to the broker was slow.
- A problem with Jigsaw and custom codecs prevented KaDeck from initializing the codecs and connecting to a cluster.
Some minor bug fixes. A problem with OpenJDK that prevented activation of the product license was solved.
The version 2.0 update includes many new features and major improvements. Many of the features we have added in this release strategically pave the way for our long term vision. We wish you much fun exploring the new features, especially the new Quick Processor, and can’t wait to get your feedback.
The Quick Processor is a powerful new feature in KaDeck 2.0 that enables two new ways of processing records: filtering and transformation.
- Transformation / Mapping
You can transform records as you wish: change existing attributes, add completely new attributes, or transform data into a completely different format.
Both modes can be combined, applied live to the data in KaDeck, and saved with a “view”. This enables completely new use cases – for example, generating reports for business departments that contain complex business logic.
Being able to modify the data “live” is also extremely efficient and offers an ideal foundation for rapid prototyping.
Flow View & Time Graph
With KaDeck 2.0 two new visualizations have been added:
- Flow View
With the Flow View, the data flow from data producers to data consumers can be visually traced. The lag of a consumer is also displayed and producers can be named and individually identified.
- Time Graph
The time graph displays the time distribution of the data sets as a bar chart. This allows you to see at a glance the concentrations of critical data or failures in deliveries over a specific period of time.
- Added broker metrics to the server’s detail page.
- The time window limit can now be fully customized.
- Added Topic Inspector view for inspecting consumer offsets.
- SSL Endpoint identification algorithm can now be set in the settings.
- The desktop applications are now portable executables.
- KaDeck is now invisible to the Apache Kafka broker.
- Many performance optimizations and bug fixes.