This is an exciting time for technologies that enable data-driven companies. While there have been many impactful IT-facing innovations over the last 40 years—hardware virtualization, public cloud, flash drives, to name a few—the most promising data-focused innovations have arisen only in the last several years. The proliferation and maturation of big data technologies have given organizations more power to leverage data to drive their business operations. As data-driven companies try to accomplish more and more with their huge volumes of data, they continue to explore new technologies to stay ahead.
Hadoop has been, and likely will continue to be, a key technology for managing big data. Its ability to handle a wide variety of data types, its scale-out architecture on commodity hardware, and its widespread support make it an ideal data management platform for today and the future. But Hadoop certainly does not and should not operate in isolation. Related technologies like Spark, NoSQL, NewSQL, and object stores will also play critical roles in building out modern architectures.
And while big data innovations have largely pertained to data management, the focus on end users, particularly regarding BI and analytics, will continue to grow. The notion of self-service, especially important for gaining competitive advantage through data agility, will become an even stronger theme. More powerful tools, such as those that enable the “citizen data scientist,” will dominate the BI/analytics landscape. Relatedly, the phenomenon of “data democratization” is a key aspiration that will let companies get more value from data with less overhead.
The advent of big data technologies resulted from the recognition that traditional technologies could not efficiently handle the volume, velocity, and variety of data inherent in today’s business environment. In other words, a separate class of technologies was needed to handle emerging challenges. This evolution is certainly true in the BI/analytics world. As data-driven companies seek more data, more agility, and more insights, all at lower cost, a new paradigm is required. One cannot expect to use traditional technologies on big data without making significant compromises that ultimately reduce its value. With newer technologies, you will not have to make the compromises that limit your ability to compete. The industry has evolved from a traditional BI approach to new approaches such as SQL-on-Hadoop, OLAP on big data, and native visual analytics. While these newer technologies have the obvious disadvantage of a shorter track record, the proof points among innovating companies using them today show distinct operational advantages that make the investment worthwhile.
Native visual analytics is arguably the most intriguing approach because it provides more advantages for analyzing big data than the other approaches. An integrated suite of analytics and query acceleration benefits both end users and IT administrators. End users get powerful, easy-to-use visualizations with faster performance, while the IT team has significantly lower overhead around deployment and data management activities. Support for complex data types lets end users analyze a much wider range of data sources, a key tenet of big data, and reduces the ETL effort on the management side. And immediate access to granular data means end users can get details, not just summaries and aggregations, without time-consuming IT intervention.
Arcadia Data is an example of a technology provider for native visual analytics. With capabilities that cater to a wide range of end users, including power users, it makes the Gartner vision of “citizen data scientists” real. Visual analytics and more advanced data discovery need no longer be limited to a few senior managers using algorithms designed by data scientists. Instead, data professionals can focus on building operational programs and models based upon discoveries made by the people who use the data. They can also work with larger volumes of varied data types while retaining the ability to drill down to a granular level, all within the same application. Support for a variety of big data environments, including Hadoop and the cloud, means the IT team can choose the deployment model that best suits their needs while giving end users the data access they require.