A data fabric is an architecture and data management approach that spans all data integrations across multiple data pipelines and cloud environments, using intelligent data management and automation to govern the entire fabric. Data fabrics are sophisticated and sometimes conflated with approaches such as data virtualization. Data virtualization, however, is only one tool within a data fabric approach: it lets the fabric reach out to different data sources and integrate their metadata with the metadata of other sources, creating a virtual data layer that can be queried in real time. Data fabrics also address considerations beyond the mechanical aggregation of data, such as data compliance; in this way, a data fabric encompasses more than ETL processes alone. A data fabric must also account for embedded governance and compliance logic, security and privacy measures, and analytics delivered to multiple end-user roles.
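The virtual data layer described above can be sketched in a few lines. The sketch below is purely illustrative, assuming hypothetical names (`VirtualDataLayer`, `DataSource`, the "crm" and "erp" sources); the key idea it shows is that metadata is merged into one catalog while the data itself stays in place and is fetched only at query time.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    """A registered source; `fetch` stands in for a live connector."""
    name: str
    metadata: dict
    fetch: callable  # returns rows on demand; nothing is copied up front

class VirtualDataLayer:
    """Unified metadata catalog over sources that stay where they are."""
    def __init__(self):
        self.sources = {}

    def register(self, source: DataSource):
        self.sources[source.name] = source

    def catalog(self) -> dict:
        # Merge per-source metadata into one virtual view.
        return {name: s.metadata for name, s in self.sources.items()}

    def query(self, name: str):
        # Data is pulled from the source only at query time.
        return self.sources[name].fetch()

layer = VirtualDataLayer()
layer.register(DataSource("crm", {"fields": ["id", "email"]},
                          lambda: [{"id": 1, "email": "a@b.c"}]))
layer.register(DataSource("erp", {"fields": ["id", "order_total"]},
                          lambda: [{"id": 1, "order_total": 99.5}]))
print(layer.catalog())
```

A real data virtualization engine would add query federation, caching, and live connectors, but the register-then-catalog shape is the same.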

Data fabric architecture is like a patchwork quilt: each tool fits together with the others to form a data fabric tailored to an organization's needs. A company operating on a hybrid cloud data fabric, for instance, could use AWS for data ingestion and Azure for data transformation and consumption. These cloud providers are comparable, but not interchangeable. Data architects devising their own data fabrics must be intimately familiar with the capabilities and limitations of each vendor's products before tying them together.

According to Forrester, six data fabric layers constitute a framework that enables DataOps.

  1. Data Management Layer: The data management layer is responsible for data governance and data security.
  2. Data Ingestion Layer: The data ingestion layer begins stitching cloud data together, finding connections between structured and unstructured data.
  3. Data Processing Layer: The data processing layer refines the data so that only relevant data is surfaced for extraction.
  4. Data Orchestration Layer: The data orchestration layer performs some of the most important jobs in the fabric: transforming, integrating, and cleansing the data so it is usable by teams across the business.
  5. Data Discovery Layer: The data discovery layer surfaces new opportunities to integrate disparate data sources.
  6. Data Access Layer: The data access layer allows consumption of the data, enforcing the right permissions for specific teams in order to comply with government regulations. This layer also helps surface relevant data through dashboards and other data visualization tools.
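One way to picture how the six layers compose is as sequential stages over a shared context. The sketch below is a toy illustration, not a real product API; every function name, field, and record in it is hypothetical.

```python
# Hypothetical sketch: the six layers as sequential stages over a shared dict.

def manage(ctx):
    ctx["audit"] = []  # governance/security hooks would attach here
    return ctx

def ingest(ctx):
    # Pull raw records from (simulated) structured and unstructured sources.
    ctx["records"] = [{"id": 1, "raw": " Alice "}, {"id": 2, "raw": ""}]
    return ctx

def process(ctx):
    # Keep only records with usable content.
    ctx["records"] = [r for r in ctx["records"] if r["raw"].strip()]
    return ctx

def orchestrate(ctx):
    # Transform and cleanse: normalize the raw field.
    for r in ctx["records"]:
        r["name"] = r.pop("raw").strip()
    return ctx

def discover(ctx):
    # Surface candidate sources to integrate next (placeholder result).
    ctx["suggested_sources"] = ["billing_db"]
    return ctx

def access(ctx):
    # Consumption layer: expose only the fields a role may see.
    return [{"name": r["name"]} for r in ctx["records"]]

ctx = {}
for stage in [manage, ingest, process, orchestrate, discover]:
    ctx = stage(ctx)
print(access(ctx))  # [{'name': 'Alice'}]
```

In a production fabric each stage would be a distributed service rather than a function, but the ordering and hand-offs follow the same pattern.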

Data fabric software is a unified data platform that integrates an organization's data and data management procedures. The following are required features of data fabric software.

  • Control and perform all data management processes under a single unified data platform
  • Connect to and extract data from multiple disparate data sources, which may be physically distant from the data fabric platform
  • Manage data across all on-premises and cloud environments
  • Provide seamless access to, and control of, all data, data types, and sources
  • Provide analytics tools and/or integrations for analytics suites
  • Support metadata functionality
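To make the feature list concrete, here is a minimal in-memory sketch of a platform exposing a few of them (a single catalog spanning environments, extraction, and metadata). The class and source names are hypothetical, chosen only for illustration.

```python
class InMemoryFabric:
    """Toy single-platform fabric: one catalog over many registered sources."""
    def __init__(self):
        self._sources = {}  # name -> (environment, rows)

    def connect_source(self, name, environment, rows):
        # `environment` might be "on-prem", "aws", "azure", etc.
        self._sources[name] = (environment, rows)

    def extract(self, name):
        # Extraction from a registered source, wherever it lives.
        return list(self._sources[name][1])

    def metadata(self):
        # Metadata functionality: where each dataset lives and its size.
        return {n: {"environment": env, "rows": len(rows)}
                for n, (env, rows) in self._sources.items()}

fabric = InMemoryFabric()
fabric.connect_source("sales", "aws", [{"id": 1}, {"id": 2}])
fabric.connect_source("hr", "on-prem", [{"id": 7}])
print(fabric.metadata())
```

Real data fabric software would back each of these methods with connectors, access control, and analytics integrations rather than an in-process dict.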

A data fabric is used to simplify the monitoring and management of data wherever it resides. The goal is not new; the data fabric, however, is a new approach to achieving fully visible, unified enterprise data operations. Data fabrics leverage the best of cloud, core, and edge environments, folding on-premises, private cloud, and public cloud resources into the fabric. From a centralized platform, teams can monitor all data sources, optimize data pipelines, and gain actionable insights into data operations.

The technical purpose of a data fabric solution is to abstract the underlying data storage technology away from typical Extract, Transform, and Load (ETL) data collection processes. Several data trends are challenging traditional ETL, chiefly the exponential increase in data generation, collection, and storage. With ETL, data resides in several silos, such as data lakes and data warehouses, and must be extracted from those sources, transformed into usable formats, and loaded into a user-friendly system such as a data mart. As data continues to be generated and collected automatically by cellphones, IoT devices, and similar sources, more of it remains unreachable, locked away in data centers.
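The extract/transform/load cycle described above can be sketched in a few lines. The silos, field names, and mart here are invented for illustration; the shape (pull from silos, cleanse into a usable format, load into a consumer-facing store) is the standard ETL pattern.

```python
# Minimal ETL sketch: extract from two "silos", transform, load into a mart.
silo_a = [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "bad"}]
silo_b = [{"id": 3, "amount": "7.0"}]

def extract():
    # Pull raw rows out of each silo.
    return silo_a + silo_b

def transform(rows):
    # Cleanse into a usable format, dropping rows that cannot be parsed.
    cleaned = []
    for r in rows:
        try:
            cleaned.append({"id": r["id"], "amount": float(r["amount"])})
        except ValueError:
            continue
    return cleaned

def load(rows, mart):
    # Land the cleansed rows in a user-friendly store (the "data mart").
    mart.extend(rows)
    return mart

mart = load(transform(extract()), [])
print(mart)  # [{'id': 1, 'amount': 10.5}, {'id': 3, 'amount': 7.0}]
```

A data fabric does not discard this cycle; it automates and federates it across many more sources than an admin could wire up by hand.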

The challenge for ETL tools is automating more of the process, such as data discovery, in which the system locates new data sources itself rather than waiting for an admin to register them (or fail to, risking incomplete data analysis). Many ETL tools have kept up; however, some data landscapes are simply too vast for admin-led ETL processes.
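The discovery idea reduces to comparing what the catalog already knows against what actually exists, then registering the difference automatically. This is a toy sketch under that assumption; `scan_environment` and the source names are placeholders for real probing of buckets, databases, and device feeds.

```python
# Hypothetical automated discovery pass: register new sources without an admin.
known_sources = {"orders_db", "crm_db"}

def scan_environment():
    # Stand-in for probing cloud buckets, databases, IoT feeds, etc.
    return {"orders_db", "crm_db", "iot_feed_17"}

def discover(known):
    found = scan_environment()
    new = found - known
    known |= new  # auto-register instead of waiting on an admin
    return sorted(new)

print(discover(known_sources))  # ['iot_feed_17']
```

Running the pass again finds nothing new, which is the point: the catalog stays complete without manual updates.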

With a data fabric, thousands of data sources can be managed using automated data tools that simplify an organization’s data operations. Data silos that once plagued ETL strategies no longer need to do so.

Data is only useful once it has been placed in context and made accessible to the company's users and applications. A correctly implemented data fabric does exactly this. Three key business benefits of data fabrics:

  • Enables self-service data consumption — With a fully integrated data fabric, users can visit the data warehouse and pull out exactly the data they need at the moment. In short, data fabrics quickly deliver specific insights to those who need them most.
  • Supports data through automated governance, security, and protection — An active governance layer, with policy enforcement, immediately ensures transparency and instills trust in data.
  • Enables automated data engineering tasks — Data fabric automations eliminate human error and can provide virtually any form of data access or data processing. Additionally, metadata can be used to optimize data access and delivery.
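An "active governance layer with policy enforcement" can be pictured as a check that runs on every read. The roles, datasets, and policy table below are invented for illustration; the point is that access is denied by default and permitted only where policy says so.

```python
# Sketch of an active governance layer: policies are enforced at access time.
POLICIES = {
    "analyst": {"sales"},                  # role -> datasets it may read
    "admin":   {"sales", "employee_pii"},
}

DATASETS = {
    "sales": [{"region": "EU", "total": 120}],
    "employee_pii": [{"name": "A. Jones", "ssn": "xxx-xx-1234"}],
}

def read(role, dataset):
    # Deny by default; permit only what the policy table allows.
    if dataset not in POLICIES.get(role, set()):
        raise PermissionError(f"{role} may not read {dataset}")
    return DATASETS[dataset]

print(read("analyst", "sales"))
# read("analyst", "employee_pii") would raise PermissionError
```

Because the check sits in the access path rather than in each application, every consumer of the fabric gets the same enforcement automatically.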

Despite the ease promised by many data fabric tools, the following list marks common challenging points for data fabric implementation.

  • Deploying and Configuring Data Fabric Services
  • Managing Dependencies between Services
  • Designing a Data Model and Creating a Data Infrastructure
  • Integrating with External Systems
  • Monitoring and Troubleshooting Data Fabrics

Data fabrics are relatively new, and given their capabilities and rapid advancement, their full range of potential use cases has yet to be discovered. The following are just a few use cases that data fabrics enable.

  • Innovation — Data fabrics are feature rich and can open new paths of innovation for enterprises. In particular, with the inclusion of AI and machine learning, data fabrics can accelerate data and analytics lifecycles.
  • Preventative maintenance — Data fabrics and automations can be used to conduct preventive maintenance based on analytics of schedules, performance, and various other data points fed into predictive models.
  • Elimination of data silos — One of the main purposes of a data fabric is to make all data sources visible and transparent, in effect leveling data silos.
  • Deep insights — Data fabrics can compile data and distinguish related data sets drawn from multiple sources, viewing them from multiple angles. Applied to areas such as customer satisfaction, this analysis can be used to enhance each individual customer's overall experience.
  • Regulatory compliance — With the advent of automation and data discovery, regulatory compliance has become manageable across millions of personal records, a feat not easily achieved before.
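As one concrete illustration of the preventive-maintenance use case, a predictive model can be as simple as a rolling-average rule over readings the fabric has already collected. The machines, readings, and threshold below are hypothetical.

```python
# Toy predictive-maintenance rule over fabric-collected sensor readings:
# flag machines whose recent average temperature exceeds a threshold.
readings = {
    "press_1": [70, 72, 71, 74],
    "press_2": [85, 88, 91, 95],
}

THRESHOLD = 80.0

def needs_service(history, window=3):
    # Average the most recent `window` readings.
    recent = history[-window:]
    return sum(recent) / len(recent) > THRESHOLD

flagged = [m for m, hist in readings.items() if needs_service(hist)]
print(flagged)  # ['press_2']
```

Production models would of course be far richer, but the pattern — fabric gathers the data points, analytics turn them into a maintenance decision — is the same.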