The Flowcore Platform is a game-changer for developers looking to create and deploy data infrastructure with ease. It allows you to set up your data system using a simple manifest, ingest data through straightforward interfaces like webhooks or filehooks, and stream events out via a CLI, SDK, or Transformers deployed right on the platform.
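For a quick taste of the ingestion side, posting an event through a webhook is just an HTTP request. This is a minimal sketch: the URL, tenant and data core names, payload fields, and the API-key header are all placeholders rather than the exact Flowcore API, so copy the real values from your own webhook configuration.

```typescript
// Minimal sketch of webhook ingestion. The URL shape, names and auth header are
// placeholders; use the endpoint and credentials from your own webhook setup.
const WEBHOOK_URL =
  "https://example.com/webhook/my-data-core/organization/organization.created"; // placeholder

export async function ingestOrganizationCreated(): Promise<void> {
  const response = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Authentication depends on your setup; an API key header is assumed here.
      "X-API-Key": process.env.FLOWCORE_API_KEY ?? "",
    },
    body: JSON.stringify({ id: "org-123", name: "Acme Inc." }),
  });

  if (!response.ok) {
    throw new Error(`Ingestion failed with status ${response.status}`);
  }
}
```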
Don't let the simplicity fool you, though. While Flowcore aims to make working with data as easy as possible, it doesn't skimp on scalability. All components of the platform are designed to scale linearly, ensuring your infrastructure can grow with your needs.
Here's a cool tidbit: we use the Flowcore Platform ourselves when developing the platform. Talk about practicing what we preach!
Let's break down the building blocks that make up the Flowcore infrastructure:
Think of the Data Core as the main container for your data - like a collection of databases. You'll typically structure this at the application level. For example, we keep all the data related to the Flowcore Platform's operations in a Data Core called flowcore-platform.
A Flow Type is similar to a database within your Data Core. It's a collection of data types, usually organized around specific business logic domains. In the Flowcore Platform, we have separate Flow Types for each domain, like `organization`, `user`, `data-core`, and so on.
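To make the hierarchy concrete, here is a rough sketch of how a Data Core nests Flow Types, which in turn hold the Event Types described next. Treat the shape as an illustration only; real deployments are described in a manifest, and the field names here are assumptions rather than the manifest format.

```typescript
// Illustrative only: a plain object mirroring the Data Core hierarchy.
// Field names are assumptions, not the actual manifest syntax.
const flowcorePlatformDataCore = {
  dataCore: "flowcore-platform", // the application-level container
  flowTypes: {
    organization: {
      // one Flow Type per business-logic domain
      eventTypes: ["organization.created", "organization.updated", "organization.archived"],
    },
    user: {
      eventTypes: ["user.created", "user.updated"],
    },
  },
};
```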
An Event Type is comparable to a table in a database. It's where each specific data type lives. While there's no strict schema enforced, a loose schema is derived from the data that's ingested into each Event Type. Need to know the schema? Just use the `flowcore types <stream>` command, and it'll generate one for you based on the stream's contents.
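For example, if an `organization.created` stream only ever receives payloads with an id and a name, the derived schema might look something like the interface below. It's hand-written here for illustration; in practice you'd let the CLI generate it, and its output may differ in shape and naming.

```typescript
// A hypothetical schema derived from the payloads seen on an organization.created stream.
interface OrganizationCreated {
  id: string;
  name: string;
  createdBy?: string; // optional because only some of the ingested events included it
}
```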
Events are the heart of Flowcore. Think of them as supercharged database rows. Each event contains metadata and a payload, and they're designed to be bi-temporal (fancy word for having two time dimensions): a TimeUUID event ID captures when the event was recorded, and a separate valid time captures when the thing it describes actually applies. These two time axes can be swapped with a reclassification transformer if needed.
Once an event is recorded, it's immutable - no changing your mind later! The system automatically schedules events for backup and handles cold storage management. This means when you're streaming events, you don't need to worry about hot or cold storage - the system figures that out based on the recorded time of the events.
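Put together, an event pulled from a stream looks roughly like the envelope below. The field names are assumptions made for this sketch of the metadata/payload split, not the platform's actual wire format.

```typescript
// Rough sketch of a bi-temporal event envelope. All field names are assumptions.
interface FlowcoreEvent<TPayload = unknown> {
  eventId: string;                  // TimeUUID, encodes when the event was recorded
  eventType: string;                // e.g. "organization.created"
  validTime: string;                // ISO timestamp for when the described fact applies
  metadata: Record<string, string>; // contextual information about the event
  payload: TPayload;                // immutable once recorded
}
```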
Transformers are where the magic happens. They're runtimes that process your event streams, running in a transformer shell. You can develop transformers for specific cases or create configurable runtimes that can be reused. We currently support NodeJS, Bun, and Python out of the box, but you can extend from our base transformer shell to use other languages if you prefer.
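A transformer itself usually boils down to a small handler that the transformer shell calls for each incoming event. The default export and handler signature below are assumptions for a NodeJS-style sketch; the shell you build on defines the real contract.

```typescript
// Minimal transformer sketch. The handler signature and export convention are assumptions.

// Reusing the envelope shape sketched earlier, trimmed to what this handler needs.
type FlowcoreEvent<T> = { eventId: string; eventType: string; validTime: string; payload: T };

type OrganizationPayload = { id: string; name?: string };

export default async function handler(event: FlowcoreEvent<OrganizationPayload>) {
  switch (event.eventType) {
    case "organization.created":
      console.log(`New organization: ${event.payload.name}`);
      break;
    case "organization.archived":
      console.log(`Archived organization: ${event.payload.id}`);
      break;
    default:
      // Ignore event types this transformer isn't interested in.
      break;
  }
}
```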
A strand combines an event stream and a transformer. It's like a conveyor belt for your data. For example, you might have a strand for CRUD operations on an organization, pulling events from `organization.created`, `organization.updated`, and `organization.archived`.
Each strand can be restarted individually to any point in time, allowing you to replay events as needed.
A scenario in Flowcore is like a collection of strands or a set of services that transform or process data. For instance, we might have a scenario for the `organization` business domain.
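Expressed as configuration, a scenario is just a named grouping of strands, and each strand wires a set of Event Types to a transformer. The object below is a sketch with assumed field names (including a hypothetical `startFrom` for the replay point), not the platform's actual manifest syntax.

```typescript
// Illustrative scenario for the organization domain: a collection of strands.
// All field names here are assumptions made for this sketch.
const organizationScenario = {
  scenario: "organization",
  strands: [
    {
      name: "organization-read-model",
      eventTypes: [
        "organization.created",
        "organization.updated",
        "organization.archived",
      ],
      transformer: "organization-read-model-transformer",
      // Each strand can be restarted individually; replaying from a point in
      // time would be expressed with something like this hypothetical field.
      startFrom: "2024-01-01T00:00:00Z",
    },
  ],
};
```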
A Read Model is a database where you store processed data for specific application use cases - pick whichever database fits the purpose. For example, we use different databases for each business domain - sometimes ArangoDB, sometimes PostgreSQL. In many cases it even makes sense to keep the same data in multiple database types, since each has different strengths. Don't worry about keeping everything in sync - Flowcore and the transformers handle that for you.
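To illustrate the kind of work a transformer does to keep a read model current, here is a hedged sketch that upserts organization data into PostgreSQL using the standard pg client. The table, columns, and handler wiring are assumptions made for the example.

```typescript
// Sketch of a transformer maintaining a PostgreSQL read model.
// Table and column names are made up; the handler contract is assumed.
import { Pool } from "pg";

type FlowcoreEvent<T> = { eventId: string; eventType: string; validTime: string; payload: T };

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export default async function handler(
  event: FlowcoreEvent<{ id: string; name: string }>,
) {
  if (event.eventType === "organization.created" || event.eventType === "organization.updated") {
    await pool.query(
      `INSERT INTO organizations (id, name, valid_time)
       VALUES ($1, $2, $3)
       ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, valid_time = EXCLUDED.valid_time`,
      [event.payload.id, event.payload.name, event.validTime],
    );
  } else if (event.eventType === "organization.archived") {
    await pool.query("DELETE FROM organizations WHERE id = $1", [event.payload.id]);
  }
}
```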
Just a heads up: at the moment, you'll need to deploy and manage your own read model deployments. But don't worry - we've got plans in the works to make this even easier. Our roadmap includes an option where we'll deploy various read models for you, so stay tuned for that!
Since we use the platform ourselves, we've picked up some best practices along the way:
We version all our Flow Types and Event Types. This allows us to increment the version for breaking changes, making it easier to reclassify data or maintain backward compatibility. For example, we use names like `organization.0` and `organization.created.0`.
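In practice this just means the version is part of the name you ingest to and subscribe from, so a breaking payload change becomes a new Event Type instead of a mutation of an existing one. A small sketch, with names chosen for illustration:

```typescript
// Versioned Flow Type and Event Type names as constants. When the payload changes
// in a breaking way, introduce organization.created.1 and reclassify or dual-write,
// while consumers pinned to .0 keep working untouched.
export const FLOW_TYPE = "organization.0";

export const EventTypes = {
  created: "organization.created.0",
  updated: "organization.updated.0",
  archived: "organization.archived.0",
} as const;

// A consumer pinned to version 0 simply ignores newer versions it doesn't know about.
export function isHandled(eventType: string): boolean {
  return (Object.values(EventTypes) as string[]).includes(eventType);
}
```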
When using Flowcore, it's beneficial to create simple services that follow the single responsibility principle. You can easily extend functionality and create decoupled services by hooking into the streams. Each service can build its own data based on the events, reducing the need for cross-service communication.
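For example, a billing service and a notification service can each hook into the same `organization.created` stream and maintain their own state, without ever calling each other. The services, payload fields, and helper functions below are hypothetical.

```typescript
// Two independent handlers, each deployed as its own transformer/strand, consuming
// the same event type and building their own data. Everything named here is illustrative.

type FlowcoreEvent<T> = { eventId: string; eventType: string; validTime: string; payload: T };
type OrganizationCreated = { id: string; name: string; ownerEmail: string };

// Billing service: starts a trial subscription for every new organization.
export async function billingHandler(event: FlowcoreEvent<OrganizationCreated>) {
  await startTrialSubscription(event.payload.id);
}

// Notification service: sends a welcome email, unaware that the billing service exists.
export async function notificationHandler(event: FlowcoreEvent<OrganizationCreated>) {
  await sendWelcomeEmail(event.payload.ownerEmail, event.payload.name);
}

// Hypothetical stand-ins for each service's own logic.
async function startTrialSubscription(orgId: string): Promise<void> {
  /* ... */
}
async function sendWelcomeEmail(to: string, orgName: string): Promise<void> {
  /* ... */
}
```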
One of the best things about Flowcore is how it simplifies using production data for both development and tests. That "it just works, first try!" feeling has become pretty common for us, even when deploying multiple decoupled services that come together on the frontend or CLI. The events define the contracts we use when talking to external systems, which simplifies everything.
The Flowcore Platform offers a powerful and flexible approach to building scalable data infrastructure. By leveraging its core primitives and best practices, developers can create robust, event-driven systems that are both easy to manage and highly scalable. Here are some key takeaways:
• Simplicity with Power: Flowcore simplifies data infrastructure creation and management without compromising on scalability or functionality.
• Event-Driven Architecture: The platform's focus on events and streams promotes loose coupling and enables real-time data processing.
• Flexibility in Data Storage: With concepts like Data Core, Flow Types, and Event Types, Flowcore allows for structured yet flexible data organization.
• Scalability Built-in: All components of the platform are designed to scale linearly, ensuring that your infrastructure can grow with your needs.
• Developer-Friendly: From simple manifests for deployment to easy data ingestion through webhooks and file hooks, Flowcore prioritizes developer experience.
• Versioning Made Painless: While versioning isn't automatic (yet!), following our best practices turns managing data versions and maintaining backward compatibility from a dreaded chore into a routine, predictable task.
• Decoupled Services: Flowcore encourages the creation of simple, decoupled services that can easily interact through event streams.
• Production-Ready Development: The ability to safely use production data in development and testing environments leads to more reliable and robust applications.
• Bi-Temporal Data Handling: With built-in support for bi-temporal data, Flowcore enables sophisticated time-based data analysis and management.
• Automated Data Lifecycle Management: Features like automatic backup scheduling and cold storage management reduce operational overhead.
By adopting the Flowcore Platform, development teams can focus on building business logic and creating value, rather than wrestling with complex data infrastructure. Its event-driven nature, combined with powerful primitives and best practices, makes it an excellent choice for organizations looking to build modern, scalable, and maintainable data systems.