Managing security in Hadoop is a challenging process. Kerberos authentication works well for IT users and service accounts, but it was not designed for hundreds of business users accessing Hadoop. A Hadoop cluster can span hundreds of servers, and managing user accounts on each one is a cumbersome task.

For data access (authorization), a Hadoop solution involves many data components: analysis, visualization, search, Hive tables, and access to raw files. Managing all of these groups and permissions is challenging and difficult to understand. This is one of the reasons organizations are unable to give their business users access to raw data in the data lake.





When you have millions of files and tables in your data lake, you need a simple architecture to manage security. Your goal should be for hundreds of business users to access data from the Hadoop data lake, make data-driven decisions, and ask questions without worrying about a security lapse.



This is where core-security comes into play

OvalEdge provides a role-based form of core-security. In role-based core-security, a user can be assigned to multiple roles, and each role can be assigned to various data objects. Core-security simplifies this further: a role is assigned at the database level or at a parent-level folder, and everything below it flows automatically. If the data appears in search, it inherits this security; if you analyze the data and create a new dataset, the new dataset computes its security roles automatically. Wherever data flows, the security roles propagate with it. Of course, system admins can change the policy at any level, whenever they want.
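The inheritance idea can be illustrated with a minimal sketch. This is not the OvalEdge API; the class and names below are hypothetical, and the only point demonstrated is that a role granted once at a parent level (a database or folder) flows down to every child object, including datasets derived later.

```python
# Hypothetical sketch (not the OvalEdge API): permissions assigned at a
# parent level propagate automatically to every child object.

class DataObject:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.roles = set()  # roles granted directly on this object

    def effective_roles(self):
        """Roles on this object plus everything inherited from ancestors."""
        inherited = self.parent.effective_roles() if self.parent else set()
        return self.roles | inherited

# Grant a role once, at the database level...
sales_db = DataObject("sales_db")
sales_db.roles.add("analyst")

# ...and a table under it, or a dataset derived from that table,
# inherits the role automatically -- no per-object grants needed.
orders = DataObject("orders", parent=sales_db)
report = DataObject("q3_report", parent=orders)

print(report.effective_roles())  # {'analyst'}
```

Because each object only stores its own grants and looks upward for the rest, an admin changing the policy at any level takes effect immediately for everything beneath it.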

OvalEdge also provides column-level security for any table. You can choose to mask or restrict any column. Once a column is masked, users cannot see its values but can still use it in their analysis, for example in joins and groupings.
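One common way such masking can work (the source does not specify OvalEdge's implementation, so this is an assumption) is to replace values with a deterministic hash: the raw value is hidden, but equal inputs produce equal masks, so the column remains usable for joins and GROUP BY.

```python
# Hypothetical sketch (not the OvalEdge implementation): deterministic
# hashing hides raw values while preserving equality, so a masked column
# still works in joins and groupings.
import hashlib

def mask(value):
    """Return a short, stable, non-reversible token for a sensitive value."""
    return hashlib.sha256(str(value).encode()).hexdigest()[:12]

rows = [
    {"ssn": "111-22-3333", "amount": 100},
    {"ssn": "111-22-3333", "amount": 250},
    {"ssn": "444-55-6666", "amount": 75},
]

# A restricted user sees only masked SSNs...
masked_rows = [{"ssn": mask(r["ssn"]), "amount": r["amount"]} for r in rows]

# ...but can still group by the masked column, because equal inputs
# always produce the same mask.
totals = {}
for r in masked_rows:
    totals[r["ssn"]] = totals.get(r["ssn"], 0) + r["amount"]

print(sorted(totals.values()))  # [75, 350]
```

A truncated hash like this is fine for illustration; a production system would typically use a keyed construction (e.g. HMAC) so masked values cannot be reversed by dictionary attack.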