Developing an Entitlements Management Approach

We were sitting down with a client during some initial prioritization discussions in an Identity and Access Management (IAM) Roadmap effort when the talk turned to entitlements and how they were currently being handled. Like many companies, they did not have a unified approach to managing entitlements in their new world of unified IAM (a.k.a. the end of the 3-year roadmap we were helping to develop). Even their definition of an entitlement varied from person to person, to say nothing of how they wanted to define and enforce them. We decided to take a step back and really dig into entitlements, entitlement enforcement, and some of the other factors that come into play, so we could put together a realistic enterprise entitlement management approach. We ended up having a really great discussion that touched on many areas within their enterprise. I wanted to briefly discuss a few of the topics that really seemed to resonate with the stakeholders sitting in that meeting room.

(For the purpose of this discussion, entitlements refer to the privileges, permissions, or access rights that a user is given within a particular application or group of applications. These rights are enforced by a set of tools that operate based on the policies put in place by the organization. Got it?)

  • Which Data Is the Most Valuable? – There were a lot of differing opinions on which pieces of data were the most business critical, which should be most readily available, and which needed to be protected. As a company’s data is moved, replicated, aggregated, virtualized, and monetized, a good Data Management program is critical to making sure an organization has a handle on the critical data questions:
    • What is my data worth?
    • How much should I spend to protect that data?
    • Who should be able to read/write/update this data?
    • Can I trust the integrity of the data?
  • The Deny Question – For a long time, Least Privilege was the primary model used to grant access: an entitlement is explicitly granted, users get exactly the privileges needed to do their jobs and nothing more, and all other access is implicitly denied. Newer thinking says you should minimize complexity and administration by moving to an explicit-deny model, in which everyone can see everything unless it is expressly forbidden. Granted, this model is mostly being tossed around at Gartner conferences, but I do think you will see more companies willing to loosen their grip on information that doesn’t need protection and focus their efforts on the pieces of data that are truly important to the company.
  • Age-Old Questions – Fine-Grained vs. Coarse-Grained. Roles vs. Rules. Pirates vs. Ninjas. These are questions every organization discusses as it builds its entitlements model.
    • Should the entitlements be internal to the application or externalized for unified administration?
    • Should roles be used to grant access, should those decisions be based on attributes about the users, or should some combination be used? (The sketch following this list illustrates one way to combine them.)
    • Did he really throw Pirates vs. Ninjas in there to see if we were still paying attention? (Yes.  Yes, I did).

There are no cut-and-dried answers to these questions; the right choice truly varies from application to application and organization to organization. The important part is to reach consensus on the approach and then give application teams, developers, and security staff the tools to manage entitlements going forward.
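To make the roles-vs-attributes question concrete, here is a minimal sketch in Python of a default-deny entitlement check that grants access only when a policy's role and user attributes both match. All names are hypothetical and not tied to any particular product; this is an illustration of the model, not an implementation of one.

def is_entitled(user, resource, action, policies):
    """Grant access only if some policy explicitly allows it (default deny)."""
    for policy in policies:
        if (policy["resource"] == resource
                and action in policy["actions"]
                and policy["role"] in user["roles"]          # role (RBAC) check
                and all(user["attributes"].get(k) == v       # attribute (ABAC) check
                        for k, v in policy.get("attributes", {}).items())):
            return True
    return False  # anything not explicitly granted is implicitly denied

# Hypothetical example: a claims processor in the US region may read and update claims.
policies = [{"resource": "claims", "actions": {"read", "update"},
             "role": "claims_processor", "attributes": {"region": "US"}}]
user = {"roles": {"claims_processor"}, "attributes": {"region": "US"}}

print(is_entitled(user, "claims", "update", policies))  # True
print(is_entitled(user, "claims", "delete", policies))  # False: implicit deny

Flipping to the explicit-deny model discussed above would invert the default: return True unless some policy expressly forbids the access.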

  • Are We Using the Right Tools? – This discussion always warms my heart, as finding the right technical solution for customers’ IAM needs is what I do for a living. I have my favorites and would love to share them with you, but that is for another time. As with the other topics, there really isn’t a cookie-cutter answer. The right tool will come down to how you need to use it, what your architecture looks like, which development platform you have selected, and what system performance you require. Make sure you aren’t making the decisions on the topics above based on a tool you have already selected; rather, choose the tool based on your answers to those important questions.

Oracle Identity and Access Management with EM12c: Red Pill or Blue Pill?

It seems all too often that when users are unable to access an end-user business function protected by an IDAM (Identity and Access Management) solution, the IDAM system gets the brunt of the blame, in many cases without justification. Today’s corporate web-based business functions are built on complex, service-oriented systems. As such, it can be difficult to diagnose a particular issue quickly enough to avoid restarting several components. As an issue persists, security controls may be removed or bypassed altogether, resulting in another set of problems. In many cases the root cause never gets identified, and a repeat incident occurs.

Example Use Case

Consider a system that hosts a web application providing an end-user business function that allows users to sign up for service and pay their bills online. To protect the web application, an Oracle IDAM system, referred to here as the SSO Stack, is implemented to provide access control and data protection for the end users. As Figure 1 shows, there are a lot of complicated flows and dependencies in these systems.

[Figure 1: Web application protected by the Oracle IDAM system (SSO Stack), showing component flows and dependencies]

Suppose an issue has been reported and technical support personnel are logged in to try to resolve it. To illustrate the complexity, suppose an end user cannot access the system to pay their bill. Without in-depth knowledge of what is going on inside the systems, it is difficult to determine whether the web application is the problem or whether the problem is related to the SSO Stack. And if it is the SSO Stack, which component is at fault?

Remember the movie The Matrix? Take the red pill and find out what is really going on in the matrix; take the blue pill and live in ignorance and bliss. When troubleshooting systems, the tendency is to collect and analyze logs on each of the system components independently, troubleshoot at the network level, and execute manual user tests, all of which are time consuming. How many times have you heard someone say, “I can ping the server just fine,” yet the problem persists?

[Figure 2]

What if I told you that testing at the application layer provides a more accurate indication of what is really going on inside the system? The business functionality is either working as intended or it is not. Applications performing the business functions can be modeled as services and tested in real time. Service tests measure the end user’s ability to access a service and, if automated, allow issues to be resolved before end-user complaints start rolling in. Service tests strategically placed in each critical subsystem can serve as health checks that determine which system component may be at fault when issues are reported.

EM12c Cloud Control Service Model

With EM12c Cloud Control, business functions can be modeled as services and monitored for availability and performance. Systems can be defined based on the target components hosting the service. As a service is defined, it is associated with a system and one or more service tests. Service tests emulate the way a client would access the service and can be set up using out-of-the-box test frameworks (web testing automation, SQL timing, LDAP, SOAP, ping tools, etc.) and extended through Jython-based scripting support. The availability of a service can be determined by the results of its service tests or by system performance metrics, which can also feed usage reporting and service level agreements (SLAs). Additionally, aggregate services can be modeled as a set of sub-services, with the availability of the aggregate service dependent on the availability of each individual sub-service.
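As a rough illustration of what a web service test measures, here is a short sketch in plain Python 3. The endpoint URL and threshold are hypothetical, and this is not the EM12c test framework's actual API; EM12c wraps equivalent checks in its own status and metric reporting.

import time
import urllib.request

URL = "https://sso.example.com/oam/login"  # hypothetical endpoint
THRESHOLD_SECS = 2.0                       # hypothetical performance target

start = time.time()
try:
    # Availability: the page must answer with HTTP 200.
    with urllib.request.urlopen(URL, timeout=10) as resp:
        elapsed = time.time() - start
        # Performance: the response must arrive within the threshold.
        ok = resp.status == 200 and elapsed <= THRESHOLD_SECS
        print(f"status={resp.status} elapsed={elapsed:.2f}s ok={ok}")
except OSError as exc:
    # urllib's URLError subclasses OSError, so timeouts and HTTP errors land here.
    print(f"service test FAILED: {exc}")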

Example Use Case Revisited with EM12c Service Model

Revisiting the issue reported in the previous use case: it was not a trivial task to determine whether or not it was an SSO issue, and which component or components were at fault. Now consider modeling the consumer-facing functionality as a service and running web-automation end-user service tests against the web application. Consider the SSO Stack as a service modeling the Identity and Access Management functionality. The SSO Stack can be defined as an aggregate service with the following sub-services: SSO Service, STS Service, Directory Service, and Database Service. The availability and performance of the SSO Stack can then be measured from the availability and performance of each sub-service in the chain. Going back to the problem reported in Figure 1, the end user could not access the web application to pay their bill. Suppose service tests are set up to run at the various endpoints, as illustrated in Figure 3. As expected, the end-user service tests are showing failures. If the service tests for the Directory Service and Database Service are passing, it can be concluded that the problem is within the OAM server component. Looking further at the results of the SSO Service and STS Service tests, the problematic application within the OAM server can be identified. As this illustrates, service tests provide a more systematic way of troubleshooting and can lead you to faster resolution and root cause identification.

[Figure 3: Service tests running at endpoints throughout the SSO Stack]
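The troubleshooting logic the figure implies can be summarized in a small sketch (plain Python with hypothetical component names, not an EM12c feature): given pass/fail results for each sub-service test, narrow down the failing component.

def isolate_fault(results):
    """results maps sub-service name -> True (test passing) / False (failing)."""
    if results["end_user"]:
        return "no fault: end-user business function is working"
    if not results["database"]:
        return "Database Service"
    if not results["directory"]:
        return "Directory Service"
    if not results["sso"] or not results["sts"]:
        return "OAM server (SSO/STS applications)"
    return "web application (all SSO Stack sub-services are passing)"

print(isolate_fault({"end_user": False, "database": True,
                     "directory": True, "sso": False, "sts": True}))
# -> OAM server (SSO/STS applications)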

EM12c Cloud Control Features

The following are some of the features available in the EM12c Cloud Control monitoring solution that provide the capabilities mentioned above and that are not available in the basic Enterprise Manager Fusion Middleware Control.

  1. Service management:
    1. Service definition: defining a service as it relates to a business function, and modeling services end-to-end with aggregate services.
    2. Service tests: web traffic, SOAP, RESTful, LDAP, SQL, ping, etc., to determine end-user service and system-level availability and performance.
    3. System monitoring: monitoring a group of targets that together provide a specific business function.
    4. Service level agreements (SLAs) with monitoring and reporting for optimization.
  2. Performance monitoring:
    1. Defining thresholds for status, performance, and alerts.
    2. Out-of-the-box and custom metrics.
    3. Real-time and historical metric reporting with target comparison.
    4. Dashboard views that can be personalized.
    5. Service level agreement monitoring.
  3. Incident reporting based on availability and performance threshold crossings, with escalation and tracking from open to closure. Can also be used to track SLAs.
  4. System and service topology modeling tool for viewing dependencies. Can help with performance and service level optimization and root cause analysis.
  5. Oracle database availability and performance monitoring:
    1. Throughput transaction metrics on reads, writes, and commits.
    2. DB wait time analysis.
    3. Viewing top SQL and CPU consumption by SQL ID.
    4. DBA task assistance:
      1. Active Data Guard and standby management.
      2. RMAN backup scheduling.
  6. Log and audit monitoring.
  7. Multi-domain management: production, test, and development with RBAC rules, all domains managed from one console.
  8. Automated discovery of Identity Management and Fusion Middleware components.
  9. Plug-ins from third parties and developer tools, with Jython scripting support to extend service tests, metrics, etc.
  10. Log pattern matching that can be used as a customizable alerting mechanism and performance tool.
  11. Configuration tracking and comparison for diagnostic purposes.
  12. Automated patch deployment and management.
  13. Integration with My Oracle Support.

A final note: why it is referred to as EM12c Cloud Control

One of the advanced uses of Oracle Enterprise Manager 12c is managing multiple phases of the cloud lifecycle, such as the planning, setup, build, deployment, monitoring, metering/chargeback, and optimization of the cloud. With its comprehensive management capabilities for clouds, Oracle Enterprise Manager 12c enables rapid deployment and end-to-end monitoring of infrastructure as a service (IaaS) and platform as a service (PaaS), including database as a service (DBaaS), schema as a service (Schema-aaS), and middleware as a service (MWaaS).

What is Single Sign-On?

As I was preparing for Gartner’s Identity and Access Management conference next week in Las Vegas, I was thinking about some of the typical topics attendees ask us about. There are always people who want more information about the sexy, cutting-edge topics like the Internet of Things, Privileged Identity Management, and Adaptive Access Control. I love talking about these subjects, as they are new and involve interesting problems. Solving interesting problems is fun and is the reason many of us got into the information security field.

Another topic that frequently comes up isn’t quite as sexy or fun but really is a foundational function for a mature IAM system: What is Single Sign-On (SSO)? SSO seems to be viewed by many as a commoditized feature these days, but a surprising number of organizations are still in the early stages of investigating SSO and what it might mean for them.

When explaining SSO, I used to lead off by breaking the news that you are really never going to have 100% single sign-on, but as more and more legacy desktop fat-client applications become web-enabled, approaching true single sign-on is much more realistic. These days I just get into a quick overview of what SSO means across a variety of use cases.

  • Web-Based Single Sign-On – The most commonly recognized type of SSO is the sharing of credentials and user sessions across a common set of internally managed web applications. These can be things like Oracle E-Business Suite applications, portals, and most other non-Software-as-a-Service (SaaS) web applications. A user is authenticated when the system validates their username and password (plus additional factors in some cases). They are given a session token in the form of a browser cookie that is validated and updated as they travel from application to application. Usually the same Access Management system provides some level of authorization into these applications, but we’re not going to get into all that entails.
  • Federation – Federation is a standards-based method of authenticating users into applications hosted by a third party, also called cloud-based or SaaS applications. Think of Salesforce.com or any of a variety of Oracle’s Cloud applications. There are two sides to a federated agreement: the Service Provider, which controls the actual application, and the Identity Provider, which controls the user IDs and passwords. The session token is typically a SAML assertion that is consumed by both parties and includes all of the relevant user information. These SAML assertions can typically be consumed by the same Access Management system that provides SSO for the internal applications, allowing users to move seamlessly from application to application regardless of where each application is hosted. (As an aside, when you hear Identity as a Service (IDaaS) tossed around, it typically refers to a federation model in which you still control your account information but the IDaaS is used to broker application access via federation.)
  • Windows Native Authentication – This is the bridge to true SSO, allowing the Access Management system to integrate with a Windows domain to provide a seamless experience. A user authenticates into their domain as they perform their initial login. Once validated, they receive a Kerberos ticket from the domain controller that contains user and session information, much like the browser token or SAML assertion. When they launch an application protected by the Access Management system, the Kerberos ticket is consumed and validated, and the Access Management system uses it as the basis to issue its own session token.
  • Enterprise SSO – eSSO, or desktop SSO, is based on agents installed on each workstation to handle the login process for fat-client and legacy applications. We don’t see this nearly as much anymore, since more and more applications are moving to the web.

An example to tie it all together: I sit down at my workstation and log in for the morning. A Kerberos ticket is issued. I decide that I need to check the status of a customer lead in Salesforce.com, so I launch a browser and go to the site. When I land on the app, it queries its Identity Provider (our Access Management system) to find out who I am. The Access Management system sees that I have a valid Kerberos ticket, so it creates a SAML assertion and sends me back to Salesforce. This all happens behind the scenes and is usually pretty quick. Once I am done in Salesforce, I need to go to Oracle E-Business to check on the status of an order. I browse to the app. The Access Management system sees that there is an active SSO session (via the Salesforce visit) and creates a new browser cookie to manage the session. I can go between any integrated app, on site or in the cloud, and have SSO for the duration of that session.
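For readers who like to see flows as code, here is a toy sketch in Python of the token exchanges described above. It is purely illustrative: the names and token formats are invented, and real Kerberos, SAML, and cookie handling carry far more detail.

class AccessManager:
    """Toy Access Management system: one login, many token formats."""

    def __init__(self):
        self.sessions = set()

    def login_with_kerberos(self, ticket):
        # Validate the domain-issued Kerberos ticket, then start an SSO session.
        session = f"sso-session-for-{ticket}"
        self.sessions.add(session)
        return session

    def saml_assertion(self, session):
        # Federation: hand a SAML assertion to a SaaS Service Provider.
        assert session in self.sessions, "no active SSO session"
        return f"saml-assertion({session})"

    def browser_cookie(self, session):
        # Internal web SSO: issue a session cookie for on-premise apps.
        assert session in self.sessions, "no active SSO session"
        return f"cookie({session})"

am = AccessManager()
session = am.login_with_kerberos("kerberos-ticket-from-domain")
print(am.saml_assertion(session))  # lands me in Salesforce
print(am.browser_cookie(session))  # then straight into e-Business Suite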

Obviously, this is a super-simplified version of how SSO works but I find that it gives people who don’t have a working knowledge of IAM concepts a good understanding of the functionality that is typically grouped under the SSO umbrella.

As a note, PathMaker Group typically implements SSO early in the release roadmap as it can be a quick win that shows value and progress to stakeholders.  We can get through a typical SSO project from requirements through production deployment in 3-4 months depending on scope and complexity.  Reach out to us to see how we can help you get your SSO project underway.


Oracle Internet Directory: Bulk Loading a Large User Base

During a planned environment migration, the need to move a large number of users quickly becomes paramount. There are several mechanisms to do this with OID, but most of them take far too long and have a huge performance impact. Take, for example, importing an LDIF file through the ODSM console: with a file containing roughly 600,000 users, this would take an average of 33 hours! Enter the bulkload tool. From start to finish, the same import takes under 10 minutes. How is that for an improvement? But we don’t want to discuss all the benefits of the bulkload tool, do we? Just show me how to do it, right? Here is how:

  1. Set ORACLE_HOME to your OID home (ex: /u01/app/oracle/Middleware/Oracle_IDM1)
  2. Set ORACLE_INSTANCE to your OID instance home (ex: /u01/app/oracle/Middleware/asinst_1)
  3. Copy your LDIF file to $ORACLE_HOME/ldap/bin
  4. Stop your OID instance
  5. Run the following command:

./bulkload connect="oiddb" check="TRUE" generate="TRUE" file="$ORACLE_HOME/ldap/bin/users.ldif"

Note: The “connect” string can be found in the tnsnames.ora file located in $ORACLE_INSTANCE/config. Despite what the documentation says, it is NOT the service name. It is the name at the beginning of the string.
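For example, in a hypothetical tnsnames.ora entry like the one below (host and service names invented), the value to pass as the connect string is OIDDB, the alias at the beginning, not the SERVICE_NAME inside the entry:

OIDDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oidhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = oiddb.example.com))
  )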

  6. You will be prompted for the OID password. This is the ODS schema password.
  7. The script will build all the files it requires under $ORACLE_INSTANCE/OID/load. Basically, it builds data files for every attribute/objectclass in use. Non-indexed attributes will not be shown…but worry not, they will be written.
  8. Now, time to do the actual import. Run the following command from the same location:

./bulkload connect="oiddb" load="TRUE"

  9. Sit back, relax, and watch how quickly the import executes.
  10. Restart OID, and connect to ODSM to verify the number of entries.
  11. Marvel at your ingenuity.