Top Six Things to Consider with an Identity-as-a-Service (IDaaS) Solution – Blog 5 of 6

5. Robust Access Policies and Multi-factor Authentication (MFA)

 

Today you live with the risks of users accessing many more services outside the corporate network perimeter, and of users carrying many more devices to access those services. Users have too many passwords, and those passwords are inherently weak. In fact, passwords have become more of an impediment to users than a protection against hackers and other malicious individuals and organizations. In short, passwords alone often cannot be trusted to properly and securely identify users.

Consequently, you need a better solution: one that incorporates strong authentication and delivers a common multi-factor experience across all your apps, whether SaaS, cloud, mobile, or on-premises. The solution also needs access policies that take into account the complete context of the access request, helping to overcome these new security risks. In addition, you need the capability to establish flexible access policies per app for more granular and adaptive control. For example, if a user is accessing a common app from a trusted device on the corporate network, from his home country, during business hours, then simply allow him silent SSO access to the app. But if that same user is accessing an app from outside the corporate network, on an untrusted device, outside of business hours, and from a foreign country, then deny access, or at least require additional factors of authentication.

Specifically, you need an IDaaS solution that ensures secure authentication by combining multi-factor authentication (MFA) with rich, flexible per-app authentication policies.

Multi-factor authentication methods should include at least:

• Soft token with one-button authentication to simplify the experience
• One Time Passcode (OTP) over SMS text or email (see the sketch after this list)
• Interactive phone call to the user’s mobile device, requiring confirmation before authentication can proceed
• User-configurable security question to act as a second password
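
As a concrete illustration of the OTP option above, here is a minimal sketch in JavaScript (Node.js). The sendSms function and phone number are assumptions stubbed out for the example; a real deployment would integrate an SMS provider and store the code server-side with an expiry.

    const crypto = require('crypto');

    // Generate a 6-digit one-time passcode using a cryptographically secure RNG.
    function generateOtp() {
      return crypto.randomInt(0, 1000000).toString().padStart(6, '0');
    }

    // Hypothetical SMS gateway call; replace with your provider's client.
    function sendSms(phoneNumber, message) {
      console.log(`SMS to ${phoneNumber}: ${message}`);
    }

    const otp = generateOtp();
    sendSms('+1-555-0100', `Your verification code is ${otp}. It expires in 5 minutes.`);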

Per-app authentication policies should allow, deny, or step up authentication based on a rich understanding of the request’s context, drawing on any combination of the following (a simple policy sketch follows this list):

• Time of day, work hours
• Inside/Outside corporate network
• User role or attributes
• Device attributes (type, management status)
• Location of request or location of user’s other devices
• App client attributes
• Custom logic based on specific organizational needs
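
To make the policy idea concrete, here is a minimal, hedged sketch of an adaptive policy check in JavaScript. The context fields and thresholds are illustrative assumptions, not any particular vendor’s policy engine.

    // Possible decisions an adaptive policy can return.
    const ALLOW = 'allow', DENY = 'deny', STEP_UP = 'step-up';

    // Illustrative context-based decision: trusted, in-network, in-hours
    // requests get silent SSO; clearly risky requests are denied;
    // everything in between triggers additional authentication factors.
    function evaluateAccess(ctx) {
      const risky =
        !ctx.deviceTrusted || !ctx.onCorporateNetwork ||
        !ctx.withinWorkHours || ctx.country !== ctx.homeCountry;

      if (!risky) return ALLOW;                       // silent SSO
      if (!ctx.deviceTrusted && ctx.country !== ctx.homeCountry &&
          !ctx.withinWorkHours) return DENY;          // too many red flags
      return STEP_UP;                                 // e.g. require an OTP
    }

    console.log(evaluateAccess({
      deviceTrusted: true, onCorporateNetwork: true,
      withinWorkHours: true, country: 'US', homeCountry: 'US'
    })); // -> 'allow'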

Meeting IAM Gaps and Challenges with New Product Offerings

PathMaker Group has been working in the Identity and Access Management space since 2003. We take pride in delivering quality IAM solutions with the best vendor products available. As the vendor landscape changed with mergers and acquisitions, we specialized in the products and vendors that led the market with key capabilities, enterprise scale, reliable customer support, and strong partner programs. As the market evolves to address new business problems, regulatory requirements, and emerging technologies, PathMaker Group has continued to expand our vendor relationships to meet these changes. For many customers, the requirements for traditional on-premises IAM haven’t changed. We will continue supporting these needs with products from IBM and Oracle. To meet many of the new challenges, we have added new vendor solutions we believe lead the IAM space in meeting specific requirements. Here are some highlights:

IoT/Consumer Scalability

UnboundID offers a next-generation IAM platform that can be used across multiple large-scale identity scenarios such as retail, the Internet of Things, or the public sector. The UnboundID Data Store delivers unprecedented web-scale data storage, handling billions of identities along with the security, application, and device data associated with each profile. The UnboundID Data Broker is designed to manage real-time, policy-based decisions according to profile data. The UnboundID Data Sync provides high-throughput, low-latency, real-time data synchronization across organizations, disparate data systems, or even on-premises and cloud components. Finally, the UnboundID Analytics Engine gives you the information you need to optimize performance, improve services, and meet auditing and SLA requirements.

Identity and Data Governance

SailPoint provides industry-leading IAM governance capabilities for both on-premises and cloud-based scenarios. IdentityIQ is SailPoint’s on-premises, governance-based identity and access management solution that delivers a unified approach to compliance, password management, and provisioning activities. IdentityNow is a full-featured, cloud-based IAM solution that delivers single sign-on, password management, provisioning, and access certification services for cloud, mobile, and on-premises applications. SecurityIQ is SailPoint’s newest offering; it provides governance for unstructured data and assists with data discovery and classification, permission management, and real-time policy monitoring and notifications.

Cloud/SaaS SSO, Privileged Access and EMM

Finally, Centrify provides advanced privileged access management, enterprise mobility management, and cloud-based access control for customers across industries and around the world. The Centrify Identity Service is a Software-as-a-Service (SaaS) product that includes single sign-on, multi-factor authentication, and enterprise mobility management, as well as seamless application integration. The Centrify Privilege Service provides simple, cloud-based control of all of your privileged accounts along with extremely detailed session monitoring, logging, and reporting capabilities. The Centrify Server Suite lets you leverage Active Directory as the source of privilege and access management across your Unix, Linux, and Windows server infrastructure.

With the addition of these three vendors, PMG can help address key gaps in a customer’s IAM capability. To better understand the eight levers of IAM maturity and where you may have gaps, take a look at this blog by our CEO, Keith Squires, about the IAM MAP. Please reach out to see how PathMaker Group, using industry-leading products and our tried-and-true delivery methodology, can help get your company started on the journey to IAM maturity.

With today’s increasing Mobile Enterprise Security Threats, do you have a strategy to mitigate the risk on your Corporate Network?

Corporations are increasingly utilizing mobile enterprise systems to meet their business objectives, allowing mobile devices such as smartphones and tablets to access critical applications on their corporate networks. These devices offer capabilities beyond traditional desktop clients, such as information sharing, access from anywhere at any time, data sensors, and location awareness. But what makes these mobile devices desirable, by their very nature, also poses a new set of security challenges. Reports by research agencies in recent years show an alarming trend in mobile security threats, listing as top concerns Android malware attacks and, on the iOS platform, enterprise provisioning abuse and outdated OS versions.

These trends highlight the need for corporations to take a mobile security strategy as seriously as cyber criminals are taking the planning of future attacks. A mobile security strategy might involve adopting the mobile security guidelines published by standards bodies such as NIST and by the OWASP Mobile Security Project.

The following guidelines are a subset of mobile security guidelines I pulled from various published sources, with most coming from NIST. It is by no means a comprehensive list, but it can serve as a starting point or as additional considerations for an existing mobile security strategy.

1 – Understand the Mobile Enterprise Architecture

You should start by understanding and diagramming the flow from the mobile application to the business applications running on the back-end application server. This is a great starting point and should be done in the beginning stages, as most of the security guidelines will depend on what is known about the architecture.

  1. Is the mobile application a native application or mobile web application? Is it a cross-platform mobile application?
  2. Does the mobile application use middleware to get to the back-end API, or does it connect directly to a back-end RESTful web service?
  3. Does the mobile application connect to an API gateway?

2 – Diagram the network topology of how the mobile devices connect

Is the mobile device connecting to the business application servers over the cellular network, internally through a private WiFi network, or both? Does it go through a proxy or firewall? This type of information will aid in developing security requirements and will help with establishing a QA security test bed and monitoring capability.

3 – Develop Mobile Application Security Requirements

At a high level, a security function must protect against unauthorized access and, in many cases, protect privacy and sensitive data. Building security into mobile applications is rarely top of mind in the software development process, so these requirements should be gathered as early as possible in the Software Development Life Cycle (SDLC). It has been my personal experience that you often have to work with application software developers to get best security practices adopted, so the sooner you can get that dialogue going, the better. Security objectives to consider are confidentiality, integrity, and availability. Can the mobile OS platform provide the security services required? How sensitive is the data you are trying to protect? Should the data be encrypted in transit and in storage? Do you need to consider data-in-motion protection technologies? Should an Identity and Access Management (IDAM) solution be architected as part of the mobile enterprise system? Should it include single sign-on (SSO) functionality? Should there be multi-factor authentication, role-based or fine-grained access control? Is federation required? Should the code be obfuscated to prevent reverse engineering?
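
On the encryption-at-rest question, here is a minimal sketch in JavaScript (Node.js) of authenticated encryption with AES-256-GCM. Key management (where the 32-byte key comes from and how it is stored) is out of scope and simply assumed here.

    const crypto = require('crypto');

    // Encrypt sensitive data at rest with AES-256-GCM (authenticated encryption).
    // Returns iv + auth tag + ciphertext, base64-encoded for storage.
    function encryptAtRest(plaintext, key) {
      const iv = crypto.randomBytes(12);                  // unique nonce per message
      const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
      const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
      return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString('base64');
    }

    const key = crypto.randomBytes(32);                   // assumption: a real app loads this from a key store
    console.log(encryptAtRest('card ending 4242', key));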

4 – Incorporate a Mobile Device Security Policy

What types of mobile devices should be allowed to access the organization’s critical assets? Should you allow personal, bring-your-own devices (BYOD), or permit only organization-issued or certified mobile devices to access certain resources? Should you enforce tiers of access? Centralized mobile device management technologies are a growing solution for controlling the use of both organization-issued devices and BYOD by enterprise users. These technologies can remotely wipe the data from, or lock the password of, a mobile device that has been lost or stolen. Should the enterprise require anti-malware software and OS upgrades before a device is certified on the network? To reduce high-risk mobile devices, consider technologies that can detect and ban devices that are jailbroken or rooted, as these pose the greatest risk of being compromised by hackers.

5 – Application Security Testing

According to a study performed by the Ponemon Institute, nearly 40% of 400 companies surveyed were not scanning their applications for security vulnerabilities, leaving the door wide open for cyber attacks. This highlights the urgency for security teams to put together a security vetting process to identify vulnerabilities and validate security requirements as part of an ongoing QA security testing function. Application scanning technologies typically employ two methods: Static Application Security Testing (SAST), which analyzes the source code, and Dynamic Application Security Testing (DAST), which sends modified HTTP requests to a running web application to probe for exploitable vulnerabilities. As the QA scanning process matures, it can be automated and injected into the software build process to detect security issues in the early phases of the SDLC.
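
To illustrate what "sends modified HTTP requests" means in practice, here is a heavily simplified DAST-style probe in JavaScript (Node 18+, built-in fetch). The target URL and parameter are placeholders, and real scanners cover far more payloads and vulnerability classes.

    // A toy reflected-XSS probe: inject a marker payload into a query
    // parameter and check whether it comes back unencoded in the response.
    const PAYLOAD = '<script>alert("dast-probe")</script>';

    async function probeReflectedXss(baseUrl, param) {
      const url = `${baseUrl}?${param}=${encodeURIComponent(PAYLOAD)}`;
      const response = await fetch(url);
      const body = await response.text();
      return body.includes(PAYLOAD);   // true = payload reflected unencoded (suspect)
    }

    probeReflectedXss('https://test.example.com/search', 'q')
      .then(found => console.log(found ? 'Possible reflected XSS' : 'No reflection detected'));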

6 – System Threat Model, Risk Management Process

The application scanning process will typically produce a list of security vulnerabilities classified as noise, suspect, or definitive. It is then up to the security engineers, who know the system architecture and network topology, to work with the application developers to determine whether each vulnerability represents a valid threat, and to assign a risk level based on the impact of a possible security breach. Once the risk for each application is determined, it can be managed through an enterprise risk management system where vulnerabilities are tracked and fixed and the risk is brought down to a more tolerable level.
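
A common way to assign the risk level mentioned above is a simple likelihood-times-impact calculation; the scales and thresholds below are illustrative assumptions, not a standard.

    // Toy risk scoring: likelihood and impact each rated 1 (low) to 5 (high).
    function riskLevel(likelihood, impact) {
      const score = likelihood * impact;          // 1..25
      if (score >= 15) return 'high';
      if (score >= 8)  return 'medium';
      return 'low';
    }

    // e.g. a definitive SQL injection on an internet-facing app
    console.log(riskLevel(4, 5)); // -> 'high'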

7 – Consider implementing a Centralized Mobile Device Management System

Depending on the mobile security policy that is in place, you may want to consider implementing a centralized mobile device management system, especially when bring-your-own-device (BYOD) mobiles are in the mix. Such a system can:

  • Manage certificates, security settings, profiles, etc. for mobile devices through a directory service or administration portal.
  • Enforce security settings and restrictions for organization-issued and BYOD mobile devices through policy-based management.
  • Manage credentials for each mobile device through a directory service.
  • Provide self-service automation for BYOD, reducing overall administrative costs.
  • Control which applications are installed on organization-issued devices and check for suspect applications on BYOD mobile devices.
  • Remotely wipe or lock a stolen or lost phone.
  • Detect jailbroken or rooted mobile devices.

8 – Security Information and Event Management (SIEM)

Monitor mobile device traffic to back-end business applications. Track mobile devices and critical business applications, and correlate events and log information to look for malicious activity based on threat intelligence. On some platforms it may be possible to integrate with a centralized risk management system so you can specifically be on alert for suspicious mobile events correlated with the applications at higher risk.
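
As a minimal sketch of the kind of correlation a SIEM rule performs, the JavaScript below flags a device that fails authentication repeatedly and then succeeds from a new country. The event shape and thresholds are invented for illustration.

    // Flag devices with >= 3 auth failures followed by a success from a new country.
    function flagSuspiciousDevices(events) {
      const byDevice = new Map();
      for (const e of events) {
        const s = byDevice.get(e.deviceId) || { failures: 0, countries: new Set(), flagged: false };
        if (e.type === 'auth_failure') {
          s.failures++;
        } else if (e.type === 'auth_success') {
          if (s.failures >= 3 && !s.countries.has(e.country)) s.flagged = true;
          s.failures = 0;
          s.countries.add(e.country);
        }
        byDevice.set(e.deviceId, s);
      }
      return [...byDevice].filter(([, s]) => s.flagged).map(([id]) => id);
    }

    console.log(flagSuspiciousDevices([
      { deviceId: 'd1', type: 'auth_failure', country: 'US' },
      { deviceId: 'd1', type: 'auth_failure', country: 'US' },
      { deviceId: 'd1', type: 'auth_failure', country: 'US' },
      { deviceId: 'd1', type: 'auth_success', country: 'RO' },
    ])); // -> ['d1']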


Oracle Identity and Access Management with EM12c: Red Pill or Blue Pill?

It seems all too often that when users are unable to access an end-user business function protected by an IDAM (Identity and Access Management) solution, the IDAM system gets the brunt of the blame, in many cases without justification. Today’s corporate web-based business functions are complex systems built on service-oriented applications. As such, it can be difficult to diagnose a particular issue in a timely manner, often forcing several components to be restarted. As the issue persists, security controls may be removed or bypassed altogether, resulting in another set of problems. In many cases the root cause never gets identified, and a repeat incident occurs.

Example Use Case

Consider a system that hosts a web application providing an end-user business function that allows users to sign up for service and pay their bills online. To protect the web application, an Oracle IDAM system, referred to as the SSO Stack, is implemented to provide access control and data protection for the end users. As Figure 1 shows, there are a lot of complicated flows and dependencies in the system.

[Figure 1: The web application and its SSO Stack dependencies]

Suppose an issue has been reported by an end user, and technical support personnel are logged in to try to resolve it. To illustrate the complexity, suppose an end user cannot access the system to pay their bill. Without in-depth knowledge of what is going on inside the systems, it is difficult to determine whether the web application is the problem or whether it is related to the SSO Stack. And if it is the SSO Stack, which component is at fault?

Remember the movie The Matrix: take the red pill and find out what is really going on in the matrix; take the blue pill and you live in ignorance and bliss. When troubleshooting systems, the tendency is to collect and analyze logs on each of the system components independently, troubleshoot at the network level, and execute manual user tests, all of which are time consuming. How many times have you heard someone say “I can ping the server just fine,” yet the problem persists?

[Figure 2]

“What if I told you” that testing at the application layer provides a more accurate indication of what is really going on inside the system? The business functionality is either working as intended or it is not. Applications performing the business functions can be modeled as services and tested in real time. Service tests can measure the end user’s ability to access a service and, if automated, allow issues to be resolved before end-user complaints start rolling in. Service tests strategically placed in each critical subsystem can serve as health checks, determining which system component may be at fault when issues are reported.

EM12c Cloud Control Service Model

With EM12c Cloud Control, business functions can be modeled as services to be monitored for availability and performance. Systems can be defined based on the target components hosting the service. As a service is defined, it is associated with a system and one or more service tests. Service tests emulate the way a client would access the service and can be set up using out-of-the-box test frameworks (web testing automation, SQL timing, LDAP, SOAP, ping tools, etc.) and can be extended through Jython-based scripting support. The availability of a service can be determined by the results of its service tests or by system performance metrics. Those metrics can also feed system usage reporting and service level agreements (SLAs). Additionally, aggregate services can be modeled as collections of sub-services, with the availability of the aggregate service dependent on the availability of each individual sub-service.
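
Independent of EM12c’s own test frameworks, the JavaScript sketch below (Node 18+) shows the essence of what such a service test does: emulate a client request, check the response, and record latency. The URL and expected page marker are placeholders.

    // Minimal synthetic service test: fetch the login page, verify the
    // response, and record how long the round trip took.
    async function serviceTest(url, expectedText) {
      const start = Date.now();
      try {
        const response = await fetch(url);
        const body = await response.text();
        const ok = response.status === 200 && body.includes(expectedText);
        return { available: ok, latencyMs: Date.now() - start };
      } catch (err) {
        return { available: false, latencyMs: Date.now() - start, error: err.message };
      }
    }

    serviceTest('https://sso.example.com/login', 'Sign In')
      .then(result => console.log(result));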

Example Use Case Revisited with EM12c Service Model

Revisiting the issue reported in the previous use case: determining whether it was or was not an SSO issue, and which component or components were at fault, was not a trivial task. Now consider modeling the consumer service and running web-automation end-user service tests against the web application, and modeling the SSO Stack as a service representing the identity and access management functionality. The SSO Stack can be defined as an aggregate service with the following sub-services: SSO Service, STS Service, Directory Service, and Database Service. The availability and performance of the SSO Stack can then be measured from the availability and performance of each sub-service in the chain. Going back to the problem reported in Figure 1, the end user could not access the web application to pay their bill. Suppose service tests are set up to run at the various endpoints, as illustrated in Figure 3. As expected, the end-user service tests are showing failures. If the service tests for the Directory Service and Database Service are passing, it can be concluded that the problem is within the OAM server component. Looking further into the results of the SSO Service and STS Service tests, the problematic application within the OAM server can be determined. As this illustration points out, service tests provide a more systematic way of troubleshooting and can lead you to a speedier resolution and root cause.

[Figure 3: Service tests running at each endpoint of the SSO Stack]

EM12c Cloud Control Features

The following are some of the features available in the EM12c Cloud Control monitoring solution that provide the capabilities mentioned above and that are not available in the basic Enterprise Manager Fusion Middleware Control.

  1. Service management:
    1. Service definition: defining a service as it relates to a business function, and modeling services end-to-end with aggregate services.
    2. Service tests: web traffic, SOAP, RESTful, LDAP, SQL, ping, etc., to determine end-user and system-level availability and performance.
    3. System monitoring: monitoring a group of targets that together provide a specific business function.
    4. Service level agreements (SLAs) with monitoring and reporting for optimization.
  2. Performance monitoring:
    1. Defining thresholds for status, performance, and alerts.
    2. Out-of-the-box and custom metrics.
    3. Real-time and historical metric reporting with target comparison.
    4. Dashboard views that can be personalized.
    5. Service level agreement monitoring.
  3. Incident reporting based on availability and performance threshold crossings, with escalation and tracking from open to closure; can also be used to track SLAs.
  4. System and service topology modeling tool for viewing dependencies; can help with performance and service-level optimization and root cause analysis.
  5. Oracle database availability and performance monitoring:
    1. Throughput transaction metrics on reads, writes, and commits.
    2. DB wait-time analysis.
    3. Views of top SQL and CPU consumption by SQL ID.
    4. DBA task assistance:
      1. Active Data Guard and standby management.
      2. RMAN backup scheduling.
  6. Log and audit monitoring.
  7. Multi-domain management (production, test, development) with RBAC rules; all domains from one console.
  8. Automated discovery of Identity Management and Fusion Middleware components.
  9. Plug-ins from third parties and developer tools, with Jython scripting support to extend service tests, metrics, etc.
  10. Log pattern matching that can be used as a customizable alerting mechanism and performance tool.
  11. Tracking and comparing configurations for diagnostic purposes.
  12. Automated patch deployment and management.
  13. Integration of the system with My Oracle Support.

As a final note, here is why it is referred to as EM12c Cloud Control.

One of the advanced uses of Oracle Enterprise Manager 12c is managing multiple phases of the cloud lifecycle, such as the planning, setup, build, deployment, monitoring, metering/chargeback, and optimization of the cloud. With its comprehensive management capabilities for clouds, Oracle Enterprise Manager 12c enables rapid deployment and end-to-end monitoring of infrastructure as a service (IaaS) and platform as a service (PaaS), including database as a service (DBaaS), schema as a service (Schema-aaS), and middleware as a service (MWaaS).

Three Characteristics of a Mature Identity and Access Management Program

Identity and Access Management (“IAM”) as an industry started gaining significant recognition and momentum around 2003. During these last 12 years, we’ve seen product vendors come and go, we’ve seen industry consolidation, and we’ve seen important product innovation driven by real business need.

While all this has been going on, many companies have leveraged IAM products to achieve important and significant gains in security, efficiency, and compliance enforcement. On the other hand, some companies have tried and tried to establish effective IAM programs, only to fail in their attempts to effect real change.

What makes one company succeed and another one fail while attempting to leverage the same products and technologies? What are the characteristics of a truly mature IAM program?

Over the next few weeks, I will attempt to address these questions. I also hope to create an important dialogue among those of you who have “been at it” for the last 5-10 years and have seen and been part of great successes and colossal failures. Although I have been part of hundreds of IAM projects, and will lend my experience to the discussion, you, as the readers and contributors, may have much more to contribute to make this topic come alive. Will you help?

Let’s get started with three important characteristics of a mature IAM program. This list is not exhaustive but these capabilities are common among organizations that have made IAM a strategic part of the IT infrastructure.

#1 – User Identity Integration

Pieces and parts of a user’s identity can exist across many different systems in an enterprise. HR systems are an obvious source, along with IT systems like Active Directory. Then there are the badge or physical access system, the phone system, and the various business applications that become critical for a user to perform their role. Before long, keeping up with all these disparate systems and keeping user attributes current becomes unmanageable. Most organizations recognize the problem and the need for a consolidated view of a user’s identity. It seems simple enough, but it takes planning, time, and good processes to move an organization down the road to centralizing processes, automating synchronization, and removing redundant identity attributes from across the enterprise.
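
As a toy illustration of attribute consolidation, the JavaScript below merges identity fragments from multiple sources using a fixed source-precedence order; the source names and attributes are assumptions for the example.

    // Merge identity fragments, letting earlier sources win on conflicts
    // (here: HR is authoritative, then Active Directory, then the phone system).
    const PRECEDENCE = ['hr', 'activeDirectory', 'phoneSystem'];

    function consolidateIdentity(fragments) {
      const merged = {};
      for (const source of [...PRECEDENCE].reverse()) {    // apply lowest precedence first
        Object.assign(merged, fragments[source] || {});    // higher precedence overwrites
      }
      return merged;
    }

    console.log(consolidateIdentity({
      hr:              { name: 'Pat Doe', department: 'Finance' },
      activeDirectory: { name: 'P. Doe', email: 'pat.doe@example.com' },
      phoneSystem:     { phone: '+1-555-0100' },
    })); // name resolves to 'Pat Doe' because HR wins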

#2 – Account Provisioning

Creating an account on an appropriate system with the correct permissions is a straightforward task when you’ve been given the right information and you have the time to get it done. When a company grows to around 3,000 employees, the enterprise reaches a tipping point where handling this with people and manual effort becomes untenable. Too many requests for new accounts, too many changes to existing accounts, and repeated requests to remove accounts for terminated employees all begin to pile up. This creates a backlog, delaying new workers from getting started, hampering productivity, and creating security exposures where the accounts of terminated employees remain active far too long. Centralizing and/or standardizing the process can help, but adding technology that provides automation will speed up the process while enforcing identity standards, access entitlements, and important policies. Automatic account removal for terminated employees is also a significant gain. All accounts on key systems can also be tied back to a central, validated user account, eliminating unknown, orphaned user IDs from across the enterprise.
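
A minimal sketch of the automated removal step might look like the JavaScript below; the connector map and account store are stand-ins for whatever provisioning connectors an IAM product actually uses.

    // Hypothetical connectors keyed by system name; real ones would call
    // each system's provisioning API instead of logging.
    const connectors = {
      activeDirectory: { disable: id => console.log(`AD: disabled ${id}`) },
      badgeSystem:     { disable: id => console.log(`Badge: deactivated ${id}`) },
      erp:             { disable: id => console.log(`ERP: disabled ${id}`) },
    };

    // On a termination event, disable every account linked to the central identity.
    function onTermination(user) {
      for (const account of user.linkedAccounts) {
        connectors[account.system].disable(account.accountId);
      }
    }

    onTermination({
      name: 'Pat Doe',
      linkedAccounts: [
        { system: 'activeDirectory', accountId: 'pdoe' },
        { system: 'badgeSystem', accountId: '48213' },
        { system: 'erp', accountId: 'PDOE1' },
      ],
    });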

#3 – Password Management

Password management faces a similar challenge as an organization grows and adds more and more people, systems, and applications. The initial step should be to provide tools that help desk personnel can use to centralize and automate this activity. Ultimately an organization needs to move this function away from the help desk and enable end users to manage their own passwords on key systems, including resetting their own Active Directory password. This is another step that seems simple on the surface but can actually take a significant amount of planning and coordination to get right and keep running smoothly. Organizations that make a misstep on their first attempt find it difficult to gain user adoption the second (or third) time around. Eventually, standardized help desk procedures can assist the user community in adopting the self-service approach to managing passwords.

 

Identity integration, provisioning and password management are three essential building blocks, but there are another 8 – 10 key capabilities we could discuss that should be considered when talking about IAM maturity. What other capabilities would you consider to be essential building blocks? Please contribute to the discussion.

Up next, let’s talk about the essentials for planning a long-term, mature IAM program. If you’re just getting started or have been struggling to make progress, what are some of the keys to putting plans in place that can be effectively executed?

Processing Multiple Attribute Values with the TDI 7.1 FOR-EACH Attribute Connector

In previous versions of the IBM TDI product, processing a report or directory integration has been a challenge when working with attributes that may have more than one value.

One example we see frequently with the ISIM product is the erroles attribute assigned to the ISIM Person record. Luckily, IBM TDI 7.1 gives us a new connector type, the FOR-EACH Attribute loop, that allows us to process these easily. The following is a demonstration of how this function connector works in a simple report-generating TDI.

In this example we will be iterating over all Person records contained in ISIM. The connector looks like this:

[Screenshot: ISIM Person iterator connector configuration]

The Search Base & Search Filter should be something along the lines of:

[Screenshot: Search Base and Search Filter settings]

(Note: with a Search Filter of (erroles=*), only Person records that contain at least one role will be selected.)

The next step is to make use of the FOR-EACH Attribute connector. The iterator has already loaded the work.erroles object, which may contain multiple values. This attribute contains the DN of each role assigned to the ISIM Person, which we will need to translate into a role name (errolename).

We will need to define a Work Attribute Name and a Loop Attribute Name. The Work Attribute is the incoming multi-valued attribute from the iterator, in this case erroles. The Loop Attribute is the single-valued attribute holding the value for the current loop count.

[Screenshot: FOR-EACH Attribute connector with Work and Loop Attribute Names]

Once this has been defined, we can add connectors below it to look up and then record the data on each pass of the Role Loop. First we do a lookup using the DN of the role (loopRole) to resolve the role’s name (errolename).

[Screenshot: role lookup connector configuration]

[Screenshot: lookup attribute mapping]

Since the work.errolename value will be overwritten on each pass of the Role Loop, we will need to store or write the value to a file before finishing the loop and moving on to the next value of erroles. In this example I have inserted a File Connector to write the report as each value passes through the loop. However, there are other options available, such as accumulating the values in a multi-valued attribute (e.g. work.errolename) and then looping through them using JavaScript in a connector further on in the TDI; see the sketch after the screenshot below.

[Screenshot: File Connector writing the report inside the loop]
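
For the scripted alternative, a script component later in the AssemblyLine might look something like this; the accumulated attribute name is an assumption carried over from the example above.

    // Script component placed after the Role Loop: walk the accumulated
    // multi-valued errolename attribute and log each resolved role name.
    var roles = work.getAttribute("errolename");   // assumed to hold all values collected in the loop
    if (roles != null) {
      for (var i = 0; i < roles.size(); i++) {
        task.logmsg("INFO", "Role " + (i + 1) + ": " + roles.getValue(i));
      }
    }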

Considerations in Role Design

The connected world of business today requires that people perform various tasks culminating in a desired goal while carrying different responsibilities. This level of complexity creates problems for anyone managing access to various systems. Roles, as collections of permissions to be assigned to users, are being used to manage the complexity.

The design of roles is fast becoming a subject of interest in both the academic and corporate worlds, for obvious reasons. Every organization attempts its own exercise, because each has different functional needs and structures. The attempt is made mainly as a bottom-up or top-down exercise. It must be understood that role design is not a one-time activity; it is a process that keeps evolving. Changes can occur due to corporate restructuring or business demands.

In both cases, the roles can be structured in multi-level hierarchies to represent business and technical capabilities. These levels depend on the complexity of the existing systems and the number of functions that need to be assigned; the bigger the business, the greater the number of roles and levels. A small sketch of such a hierarchy follows.
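
To make the hierarchy idea concrete, here is a minimal JavaScript sketch in which a role’s effective permissions are its own plus those inherited from its parents; the role names and permissions are invented for illustration.

    // Each role carries its own permissions plus optional parent roles.
    const roles = {
      employee:       { permissions: ['portal.login'], parents: [] },
      finance:        { permissions: ['ledger.read'], parents: ['employee'] },
      financeManager: { permissions: ['ledger.approve'], parents: ['finance'] },
    };

    // Resolve effective permissions by walking up the hierarchy.
    function effectivePermissions(roleName, seen = new Set()) {
      if (seen.has(roleName)) return new Set();   // guard against cycles
      seen.add(roleName);
      const role = roles[roleName];
      const result = new Set(role.permissions);
      for (const parent of role.parents) {
        for (const p of effectivePermissions(parent, seen)) result.add(p);
      }
      return result;
    }

    console.log([...effectivePermissions('financeManager')]);
    // -> ['ledger.approve', 'ledger.read', 'portal.login']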