Self-Service Data Preparation for Hadoop
(July 12, 2016)

Organizations know they can use big data for analytics and other advanced technologies, so they've captured and stored it in systems like Hadoop. But storing data is one thing; being able to manage it is quite another.

Accessing data in Hadoop requires code that is difficult to write and maintain, which creates a gap between the skills most users have and the skills needed to manage the data. And if you can't get to the data you need, it has no more value than if you hadn't collected it in the first place. Your options? Depend on someone from IT, learn to code yourself, or find a solution that bridges the gap.

In this live webinar, our in-house experts will review and demonstrate the key capabilities of SAS® Data Loader for Hadoop. You'll see its intuitive interface for profiling, managing, cleansing and moving data in Hadoop, so you can manipulate data without knowing how to code.

We’ll discuss and demonstrate:
- Big data integration and access, including moving relational data into and out of Hadoop and self-service access to profile, transform and filter data on Hadoop.
- Big data quality, including profiling, parsing and standardizing data by pushing the processing down to Hadoop.
- Visualization and analytics, including running SAS code on Hadoop and loading data into memory for visualization with the SAS LASR Analytic Server (see the code sketch after this list).
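
To put the list above in context, here is a minimal, hypothetical sketch of the kind of hand-written SAS code that these point-and-click directives are meant to replace. The host name, port, credentials, schema and table are placeholders rather than details from the webinar, and the sketch assumes SAS/ACCESS Interface to Hadoop is licensed and configured.

    /* Hypothetical sketch: connect to Hive through the SAS/ACCESS
       Interface to Hadoop. Host, port, credentials and schema are
       placeholders, not details from the webinar. */
    libname hdp hadoop server="hive.example.com" port=10000
            user=sasdemo password="XXXXXXXX" schema=sales;

    /* Summarize a Hadoop table with PROC SQL; where implicit
       pass-through applies, the aggregation is pushed down to Hadoop
       instead of being pulled back to the SAS server. */
    proc sql;
       create table work.orders_by_region as
       select region,
              count(*)    as order_count,
              sum(amount) as total_amount
       from hdp.orders
       group by region;
    quit;

SAS Data Loader for Hadoop is designed to generate and run this kind of work behind its web interface, which is what lets business users profile, transform and load data without touching the code.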

Featured speakers
Benjamin Gani, Data Management Specialist, SAS
Derek Hardison, Global Systems Engineer, SAS

Tuesday, July 12, at 2 p.m. ET

Register

Company: SAS
