The cloud is arguably the most transformational of modern technologies, revolutionizing business operations and business intelligence. Every aspect of the analytics pipeline has been affected - from cloud data storage and cloud data management strategies to data gravity pulling analytics into the cloud - with plenty of advantages for those embracing cloud analytics.
1. Flexible cloud data storage helps evolve the data lake
Our modern age of digital transformation continues to introduce new sources of data with unprecedented amounts of output. Continuous data—from clickstreams, server logs, social networks, video games, and sensor readings—is often raw or minimally structured. From an economic and performance standpoint, traditional enterprise data warehouses (EDWs) simply cannot keep up with these data tidal waves.
What’s a data lake?
A data lake is a large data repository that allows analytical tools to connect to raw data as it is, instead of forcing the data to fit a certain format first. Data lakes support modern big data analytical requirements through faster, more flexible data ingestion and storage, allowing a variety of unstructured data analyses. Hadoop has been used for data lakes due to its low cost, scale-out data storage, parallel processing, and clustered workload management. However, on-premises deployments lack the ability to scale resources based on consumption, making them expensive and inefficient.
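The "connect to raw data as it is" idea is often called schema-on-read: structure is applied at query time instead of being enforced at load time, as a warehouse would. A minimal sketch in Python, using hypothetical clickstream and sensor records (the field names and values are illustrative, not from any particular product):

```python
import json
import io

# Hypothetical raw event records as they might land in a data lake:
# minimally structured JSON lines with inconsistent fields.
raw_events = io.StringIO("""\
{"user": "a", "page": "/home", "ms": 120}
{"user": "b", "sensor": "temp", "reading": 21.5}
{"user": "a", "page": "/pricing"}
""")

# Schema-on-read: parse each record as-is at query time,
# rather than forcing every record into one schema up front.
events = [json.loads(line) for line in raw_events]

# Answer a question directly against the raw data.
page_views = [e for e in events if "page" in e]
print(len(page_views))  # 2
```

Note that the sensor reading and the clickstream events coexist in the same store; each analysis picks out only the fields it needs.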
How has cloud data storage evolved data lakes?
By decoupling storage and compute services, the cloud introduced revolutionary elasticity. Previously co-located, storage and compute services can now scale independently as needed. With attractive, on-demand pricing, you can also scale resources up and down a lot more easily. This makes ingesting, storing, and processing data much more cost-effective in the cloud—which is exactly why cloud solutions are so critical in enabling flexibility for most modern big data analytics platforms.
2. Cloud data management influences modern analytics pipelines
Data sources are constantly increasing in volume, complexity, and diversity. The days of bringing everything into a single data warehouse for analysis are long gone—not all data questions within an organization can be answered from any one data source. In the real world, many business problems require both granular data and fast queries from one or more sources, at different stages and in varying sequences over the course of a data project.

How has the flexibility of the cloud changed data management?
Advances in cloud data management have enabled new ways to approach data flows to satisfy the complex needs of organizations. Fundamentally, this means a shift from the “bucket” mentality of EDWs to more of a “pipeline” mentality—the modern data environment no longer needs to be centralized around a single location. In the cloud, you can spin up infrastructure and services for pipeline/ETL projects in hours. Coupled with optimized database engines for different query loads, the now-ubiquitous cloud solutions offer plenty of flexibility to help organizations move, clean, and access their data in new ways.
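The pipeline mentality can be sketched as a small extract-transform-load flow. This is a hedged illustration, not any vendor's API: the CSV payload, table name, and helper functions are hypothetical, and an in-memory SQLite database stands in for a cloud database engine:

```python
import csv
import io
import sqlite3

# Hypothetical raw export from a cloud application (the "extract" source).
raw_csv = """order_id,amount,region
1001, 250.00,EMEA
1002,99.50 ,APAC
1003,180.25,EMEA
"""

def extract(text):
    # Pull raw rows out of the source export.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Clean and type the raw rows before loading.
    return [(int(r["order_id"]), float(r["amount"]), r["region"].strip())
            for r in rows]

def load(rows, conn):
    # Land the cleaned rows in a query-optimized store.
    conn.execute("CREATE TABLE orders (order_id INT, amount REAL, region TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(raw_csv)), conn)
total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE region = 'EMEA'").fetchone()[0]
print(total)  # 430.25
```

In the cloud, each stage of a flow like this can run on separately provisioned, separately scaled services, which is what makes spinning up a new pipeline project a matter of hours rather than months.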
With a modern BI platform that offers the ability to connect to any data anywhere, everyone can take advantage of data regardless of its format or where it is stored. This often includes end users connecting to data directly from cloud applications. IT can even maintain a middle layer of authorization and governance through proxy connection scenarios that satisfy needs ranging from basic user access to highly involved business logic.
3. Data gravity pulls analytics to the cloud
What is data gravity?
Data gravity is the idea that applications and services are likely to be pulled toward where the data is stored. Data, applications, and services all have their own “gravitational pull” affected by mass, request loads, latency, and bandwidth, but data has the most mass and therefore the greatest influence on the location of applications and services. When these entities are closer to one another, latency is lower and throughput is higher. Lower latency and higher throughput return query results faster, so you get to your analysis and answers sooner.
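A back-of-the-envelope calculation shows why co-location matters. The numbers below are illustrative assumptions, not measurements from any real deployment:

```python
# Rough estimate of the time to move a query result,
# combining round-trip latency and serialization over the link.

def transfer_seconds(payload_mb, bandwidth_mbps, latency_ms, round_trips=1):
    """Total time = round-trip delays + (payload in megabits / bandwidth)."""
    return round_trips * (latency_ms / 1000) + (payload_mb * 8) / bandwidth_mbps

# Analytics co-located with the data: low latency, high bandwidth.
near = transfer_seconds(payload_mb=100, bandwidth_mbps=10_000, latency_ms=1)

# Analytics far from the data: higher latency, lower bandwidth.
far = transfer_seconds(payload_mb=100, bandwidth_mbps=100, latency_ms=80)

print(f"near: {near:.3f}s, far: {far:.3f}s")
```

Under these assumed figures, the same 100 MB result takes roughly a hundred times longer when the analytics sits far from the data, which is the pull that data gravity describes.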
How does data gravity affect cloud migration and analytics strategies?
Some organizations are moving their data from on-premises to the cloud. Others are transitioning infrastructure to cloud platforms. Often they’re doing both simultaneously. Still others are born in the cloud, running exclusively on web applications and cloud-native data. Many organizations use cloud applications that host their most important data, like Google Analytics, Salesforce, NetSuite, Zendesk, and others. These applications are a core part of their infrastructure, and with so much data gravity in the cloud, analytics often follows.
Remember that cloud services are there to support your business, not to be an all-or-nothing solution. So if your data is stored across cloud and on-premises, you’ll need a hybrid solution that connects to data wherever it lives. Many companies today are using a hybrid approach to storage and analysis of on-premises and cloud data for that very reason.
4. The cloud opens new possibilities for business intelligence
Overall, the cloud makes for greater efficiency, management, and coordination of services. And data is being generated and stored in the cloud for the same reasons many technologies are moving to the cloud in the first place: lower overhead, fast startup time, and infinite scalability. Today, we see those same advantages accelerating modern BI in the cloud.
The cloud has enabled everything “as a service,” from infrastructure to software applications, including fully hosted cloud analytics. Removing the need to configure servers, manage software upgrades, and scale hardware capacity lets IT professionals focus on strategic priorities. Many organizations also find that hosted cloud solutions decrease the total cost of ownership for infrastructure and for many business processes, including analytics.
One of the greatest benefits of analytics in the cloud is the ability to try things quickly at much lower cost. There isn’t the heavy setup required by traditional models, nor the same concerns around storage limits, cluster overhead, or performance. This gives users the freedom to experiment, fail quickly, and move on to something else. You don’t have to know where you’re going; you have the freedom to explore, discover, and modernize your approach to BI.
The cloud has also made securely accessing data across the enterprise much easier. This is huge for today’s modern, self-service approach to business intelligence. Historically, business data was locked up in an on-premises installation. With data and analytics in the cloud, you have a secure way to access that data without necessarily requiring people to go through a VPN. Connecting mobile devices to the cloud is not just easier but typically more secure than many on-premises deployments, which makes it that much more convenient for anybody to access the right data to help them make decisions.
Spencer Czapiewski is Marketing Content and Editorial Manager at Tableau.