VIDEO – THE 7 TENETS OF SUCCESSFUL IAM (SAILPOINT)
4. Mobile Access Management
Mobile has become the de facto way to access cloud apps, requiring you to ensure the security and functionality of your users' devices. This includes deploying appropriate client apps to the right devices and ensuring an appropriately streamlined mobile experience. Unfortunately, most existing Identity and Access Management as a Service (IDaaS) solutions fall short when
it comes to mobile support because they were built and architected before it became clear that mobile devices (smart phones and tablets) were going to become the preeminent means to access apps. Instead, they are very web browser centric—i.e. their mobile IDaaS experience just supports web-based apps vs. also supporting rich mobile apps and device security. They also
provide no means to ensure that the user’s mobile device is trusted and secure, and while they may provision a user in the cloud service, they neglect to deliver the corresponding app to the end user’s device.
Consequently, you should look for an IDaaS solution that allows your users to enroll their mobile devices and delivers strong authentication mechanisms (using PKI certificates). The solution should let you apply mobile device-specific group policies to ensure the underlying device is secure (e.g., ensure that a PIN is required to unlock the phone, etc.), detect jailbroken or rooted devices, and allow you to remotely lock, unenroll or wipe a lost or stolen device. Once you associate the device with a user and can trust the device, you can leverage the device as an identifying factor for the user in cases where additional factors are required for multi-factor and step-up authentication.
The solution should also provide unified app management for both web-based and mobile client apps. This ensures that users are not left with partial access, or with access defined and managed in separate silos such as standalone mobile device management (MDM) solutions. Both app and mobile management should share the same roles, identities, management tools, reports and event logs. This unification of mobile and app access management reduces redundant tools, processes and skillsets.
Deciding if you should upgrade your identity and access management environment can be a daunting task. Although there are many variables and decision-making points involved, the “if” decision usually falls into one of two camps:
- The software is nearing its support end-of-life.
- There is a need to utilize new services available in the latest release.
Let’s take a look at the first camp. The end-of-life of a particular software product is tied directly to its vendor’s support. This is a very important consideration because of the potential worst-case scenario. Imagine software currently running in production whose support has been discontinued by the vendor. When a major issue occurs, technical staff reach out to the vendor with an explanation of the problem, only to hear “sorry, we can’t help you”. Unless in-house staff can diagnose and find a solution to the problem, there could be a very real, long-lasting disruption of service. The old adage “if it ain’t broke, don’t fix it” is not always the best mantra to follow with your identity and access management software. Although it is not critical to constantly upgrade to the latest and greatest release, it is recommended to stay several steps ahead of a product’s end-of-life. This is not only because of the potential issue above, but also because vendors include critical items, such as security fixes and performance enhancements, in their newest releases.
How about the second camp? Let’s take a company that is utilizing a single sign-on software product or version that is a few years old. Granted, the solution is working well, however, there is now a need to integrate mobile and social technologies for their customer base. Seeing as their current software version does not support this, but the newest version does, the obvious choice would be to upgrade. Or, as a second illustration, a company may have created a custom connector, but that connector now ships out-of-the-box with the newest version. By upgrading, they would no longer have the overhead of updating and maintaining their code.
PathMaker Group has been working in the Identity and Access Management space since 2003. We take pride in delivering quality IAM solutions with the best vendor products available. As the vendor landscape changed with mergers and acquisitions, we specialized in the products and vendors that led the market with key capabilities, enterprise scale, reliable customer support and strong partner programs. As the market evolves to address new business problems, regulatory requirements, and emerging technologies, PathMaker Group has continued to expand our vendor relationships to meet these changes. For many customers, the requirements for traditional on-premise IAM haven’t changed. We will continue supporting these needs with products from IBM and Oracle. To meet many of the new challenges, we have added new vendor solutions we believe lead the IAM space in meeting specific requirements. Here are some highlights:
UnboundID offers a next-generation IAM platform that can be used across multiple large-scale identity scenarios such as retail, Internet of Things or public sector. The UnboundID Data Store delivers unprecedented web scale data storage capabilities to handle billions of identities along with the security, application and device data associated with each profile. The UnboundID Data Broker is designed to manage real-time policy-based decisions according to profile data. The UnboundID Data Sync uses high throughput and low latency to provide real-time data synchronization across organizations, disparate data systems or even on-premise and cloud components. Finally, the UnboundID Analytics Engine gives you the information you need to optimize performance, improve services and meet auditing and SLA requirements.
Identity and Data Governance
SailPoint provides industry leading IAM governance capabilities for both on-premise and cloud-based scenarios. IdentityIQ is SailPoint’s on-premise governance-based identity and access management solution that delivers a unified approach to compliance, password management and provisioning activities. IdentityNow is a full-featured cloud-based IAM solution that delivers single sign-on, password management, provisioning, and access certification services for cloud, mobile, and on-premises applications. SecurityIQ is SailPoint’s newest offering that can provide governance for unstructured data as well as assisting with data discovery and classification, permission management and real-time policy monitoring and notifications.
Cloud/SaaS SSO, Privileged Access and EMM
Finally, Centrify provides advanced privileged access management, enterprise mobility management, and cloud-based access control for customers across industries and around the world. The Centrify Identity Service is a Software as a Service (SaaS) product that includes single sign-on, multi-factor authentication, enterprise mobility management and seamless application integration. The Centrify Privilege Service provides simple cloud-based control of all of your privileged accounts along with extremely detailed session monitoring, logging and reporting capabilities. The Centrify Server Suite provides the ability to leverage Active Directory as the source of privilege and access management across your Unix, Linux and Windows server infrastructure.
With the addition of these three vendors, PMG can help address key gaps in a customer’s IAM capability. To better understand the eight levers of IAM Maturity and where you may have gaps, take a look at this blog by our CEO, Keith Squires, about the IAM MAP. Please reach out to see how PathMaker Group, using industry-leading products and our tried and true delivery methodology, can help get your company started on the journey to IAM maturity.
Architecting mature and functional IAM strategies for our clients requires us to frequently reflect on the approaches that we have seen organizations take to solve common (and sometimes mundane) problems. One such problem is that of initial credential distribution for internal user constituents (employees, contractors, temp workers, etc.). How an organization creates and communicates a new user’s credential is really one of the first steps in maintaining a good security posture in the space of identity provisioning.
At the core of this problem is the issue of non-repudiation. Basically, the ability to say that a given account owner was the only person who could utilize their credentials to access any given information system. More information on non-repudiation can be found here.
Over the years working in the IAM field, I’ve seen customers approach the problem of getting credentials to newly created users in different ways. Some (surprisingly many) choose to have their IT departments create new user accounts using a known password or formula (such as: <user_lastname><month><day>) in the newly created system. The issue with this approach is that there is no real guarantee that the account will not be used by a third party between creation and the moment the intended user first logs in. This presents an obvious security issue that can be slightly mitigated by requiring a user to change their password after the first use. But even forcing a user to change their password doesn’t completely solve this issue.
A more mature approach is to have a random password generated that complies with corporate password policies that is then communicated to the user through the IT department or the user’s manager. This still leaves the issue of non-repudiation, since whoever generates and communicates the credential to the user or manager also has knowledge of the credential. However, this approach limits the knowledge of this credential to only those in the chain of custody of the credential, instead of everyone who has been exposed to the ‘standard known password’ or password formula.
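The random-password approach above can be sketched in a few lines. This is a minimal illustration using Python's `secrets` module; the character classes and default length are assumptions, so substitute your own corporate password policy:

```python
import secrets
import string

SYMBOLS = "!@#$%^&*"

def generate_password(length: int = 16) -> str:
    """Generate a random password with at least one lowercase letter,
    one uppercase letter, one digit, and one symbol."""
    if length < 4:
        raise ValueError("length must allow all four character classes")
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:  # rejection sampling keeps the character distribution uniform
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pwd)
                and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)
                and any(c in SYMBOLS for c in pwd)):
            return pwd
```

Note the use of `secrets` rather than `random`: the former is backed by a cryptographically secure source, which matters when the output is a credential.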
The most mature and effective way to address this issue usually involves implementing some sort of ‘account claiming’ mechanism. In this approach, a provisioning system or process generates a random password that is never known to any person. Additionally, a system-generated ‘claim token’ is sent to the user; it can be used only once and only within a specific time frame of issuance. The intended user is then directed to visit an internal account claiming site where they are asked for some personally identifying information (PII) along with their ‘claim token’. Once this information is verified, the user is directed to change their password, which is then communicated to the provisioning system and all downstream information systems. Identity provisioning platforms such as those from Oracle, IBM, and SailPoint all make available the tools required to develop/configure this solution with minimal effort. This approach more effectively protects the integrity of the credential and greatly increases an organization’s IAM security posture with very little overall implementation effort.
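The single-use, time-limited token at the heart of account claiming can be modeled as a small state store. The sketch below is illustrative only; the class and method names are invented, not taken from any particular provisioning product:

```python
import secrets
import time

class ClaimTokenStore:
    """Issue one-time, time-limited account-claim tokens."""

    def __init__(self, ttl_seconds=24 * 3600):
        self.ttl = ttl_seconds
        self._tokens = {}  # token -> (user_id, issued_at)

    def issue(self, user_id, now=None):
        """Generate a random claim token bound to a user."""
        token = secrets.token_urlsafe(32)
        self._tokens[token] = (user_id, time.time() if now is None else now)
        return token

    def claim(self, token, user_id, now=None):
        """Valid only once, only for the intended user, only within the TTL."""
        now = time.time() if now is None else now
        record = self._tokens.pop(token, None)  # pop enforces single use
        if record is None:
            return False
        owner, issued = record
        return owner == user_id and (now - issued) <= self.ttl
```

A real deployment would persist the tokens, hash them at rest, and pair the claim step with the PII verification described above; the sketch only shows the single-use and expiry mechanics.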
This article is part 1 of a multi-part series that dives into specific concepts covered during our IAM MAP activities. More information about the PathMaker Group IAM MAP can be found here.
Corporations are increasingly utilizing mobile enterprise systems to meet their business objectives, allowing mobile devices such as smart phones and tablets to access critical applications on their corporate network. These devices provide advanced capabilities over traditional desktop clients, such as information sharing, access from anywhere at any time, data sensors, location, etc. But what makes these mobile devices desirable, by their very nature, also poses a new set of security challenges. Reports by research agencies in recent years show an alarming trend in mobile security threats, listing as top concerns Android malware attacks and, for the iOS platform, enterprise provisioning abuse and outdated OS versions.
These trends highlight the need for corporations to start taking a mobile security strategy as seriously as cyber criminals are taking the planning of future attacks. A mobile security strategy might involve adopting Mobile Security Guidelines such as those published by NIST and the OWASP Mobile Security Project (see the references at the end of this document).
The following guidelines are a subset of Mobile Security Guidelines pulled from various published sources, most of them from NIST. It is by no means a comprehensive list; however, it can serve as a starting point or as additional considerations for an existing mobile security strategy.
1 – Understand the Mobile Enterprise Architecture
You should start with understanding and diagramming the flow from mobile application to business applications running on the back-end application server. This is a great starting point and should be done at the beginning stages, as most of the security guidelines will depend on what is known about the architecture.
- Is the mobile application a native application or mobile web application? Is it a cross-platform mobile application?
- Does the mobile application use middleware to get to the back-end API, or does it connect directly to a back-end Restful based Web Service?
- Does the mobile application connect to an API gateway?
2 – Diagram the network topology of how the mobile devices connect
Is the mobile device connecting to the business application servers over the cellular network, internally through a private WiFi network, or both? Does it go through a proxy or firewall? This type of information will aid in developing security requirements and help with establishing a QA security test bed and monitoring capability.
3 – Develop Mobile Application Security Requirements
At a high level, a security function must protect against unauthorized access and, in many cases, protect privacy and sensitive data. Building security into mobile applications is rarely top of mind in the software development process, so these requirements should be gathered as early as possible in the Software Development Life Cycle (SDLC). It has been my personal experience that in many cases you have to work with application software developers to get best security practices adopted, so the sooner you can get that dialogue going the better. The core security objectives to consider are confidentiality, integrity, and availability. Questions to ask include:
- Can the mobile OS platform provide the security services required?
- How sensitive is the data you are trying to protect? Should the data be encrypted in transit and in storage? Do you need to consider data-in-motion protection technologies?
- Should an Identity and Access Management (IDAM) solution be architected as part of the mobile enterprise system? Should it include Single Sign-On (SSO) functionality?
- Should there be multi-factor authentication, role-based or fine-grained access control? Is federation required?
- Should the code be obfuscated to prevent reverse engineering?
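Where multi-factor authentication makes the requirements list, one common building block is a time-based one-time password (TOTP, RFC 6238). The sketch below uses only the Python standard library to show the mechanics; a real deployment would use a vetted library and provision the shared secret securely:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second counter."""
    at_time = time.time() if at_time is None else at_time
    return hotp(key, int(at_time // step), digits)
```

Using the RFC 6238 test secret `12345678901234567890` at timestamp 59 yields the code `287082`, which matches the published test vectors.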
4 – Incorporate a Mobile Device Security Policy
What types of mobile devices should be allowed to access the organization’s critical assets? Should you allow personal Bring Your Own Devices (BYOD), or permit only organization-issued or certified mobile devices to access certain resources? Should you enforce tiers of access? Centralized mobile device management technologies are a growing solution for controlling the use of both organization-issued and BYOD devices by enterprise users. These technologies can remotely wipe the data from, or lock the password of, a mobile device that has been lost or stolen. Should enterprises require anti-malware software and OS upgrades before a mobile device is certified on the network? To reduce risk, consider technologies that can detect and ban mobile devices that are jailbroken or rooted, as these pose the greatest risk of being compromised by hackers.
5 – Application Security Testing
According to a study performed by The Ponemon Institute, nearly 40% of 400 companies surveyed were not scanning their applications for security vulnerabilities, leaving the door wide open for cyber-attacks. This highlights the urgency for security teams to put together a security vetting process to identify vulnerabilities and validate security requirements as part of an ongoing QA security testing function. Application scanning technologies typically support two scanning methods: Static Application Security Testing (SAST), which analyzes the source code, and Dynamic Application Security Testing (DAST), which sends modified HTTP requests to a running web application to exploit its vulnerabilities. As the QA scanning process matures, it can be automated and injected into the software build process to detect security issues in the early phases of the SDLC.
6 – System Threat Model, Risk Management Process
What will typically come out of the application scanning process is a list of security vulnerabilities classified as noise, suspect or definitive. It is then up to the security engineers, who know the system architecture and network topology, working with the application developers, to determine whether a vulnerability constitutes a valid threat and, if so, what risk level to assign based on the impact of a possible security breach. Once the risk for each application is determined, it can be managed through an enterprise risk management system where vulnerabilities are tracked and fixed and the risk is brought down to a more tolerable level.
7 – Consider implementing a Centralized Mobile Device Management System
Depending on the Mobile Security Policy that is in place, you may want to consider implementing a Centralized Mobile Device Management System, especially when Bring Your Own Device (BYOD) mobiles are in the mix. Such a system can:
- Manage certificates, security settings, profiles, etc. for mobile devices through a directory service or administration portal.
- Enforce security settings and restrictions for organization-issued and BYOD mobile devices through policy-based management.
- Manage credentials for each mobile device through a directory service.
- Provide self-service automation for BYOD, reducing overall administrative costs.
- Control which applications are installed on organization-issued devices and check for suspect applications on BYOD mobile devices.
- Remotely wipe or lock a stolen or lost phone.
- Detect jailbroken or rooted mobile devices.
8 – Security Information and Event Management (SIEM)
Monitor mobile device traffic to back-end business applications. Track mobile devices and critical business applications and correlate with events and log information looking for malicious activity based on threat intelligence. On some platforms it may be possible to integrate with a centralized risk management system to specifically be on alert for suspicious mobile events correlated with applications at higher risk.
- Guidelines for managing the Security of Mobile Devices in the Enterprise http://csrc.nist.gov/publications/PubsSPs.html#800-124
- Vetting the Security of Mobile Applications http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-124r1.pdf
- Top 10 Mobile Risks https://www.owasp.org/index.php/Projects/OWASP_Mobile_Security_Project_-_Top_Ten_Mobile_Risks
We were sitting down with a client during some initial prioritization discussions in an Identity and Access Management (IAM) Roadmap effort, when the talk turned to entitlements and how they were currently being handled. Like many companies, they did not have a unified approach on how they wanted to manage entitlements in their new world of unified IAM (a.k.a. the end of the 3 year roadmap we were helping to develop). Their definition of entitlements also varied from person to person, much less how they wanted to define and enforce them. We decided to take a step back and really dig into entitlements, entitlement enforcement, and some of the other factors that come into play, so we could put together a realistic enterprise entitlement management approach. We ended up having a really great discussion that touched on many areas within their enterprise. I wanted to briefly discuss a few of the topics that really seemed to resonate with the audience of stakeholders sitting in that meeting room.
(For the purpose of this discussion, entitlements refer to the privileges, permissions or access rights that a user is given within a particular application or group of applications. These rights are enforced by a set of tools that operate based on the defined policies put in place by the organization. Got it?)
- Which Data is the Most Valuable? – There were a lot of dissenting opinions on which pieces of data were the most business critical, which should be most readily available, and which data needed to be protected. As a company’s data is moved, replicated, aggregated, virtualized and monetized, a good Data Management program is critical to making sure that an organization has a handle on the critical data questions:
- What is my data worth?
- How much should I spend to protect that data?
- Who should be able to read/write/update this data?
- Can I trust the integrity of the data?
- The Deny Question – For a long time, Least Privilege was the primary model that people used to provide access. It means that an entitlement is specifically granted for access and all other access is denied, thus providing users with the exact privileges needed to do their job and nothing more. All other access is implicitly denied. New thinking is out there that says you should minimize complexity and administration by moving to an explicit deny model, where everyone can see everything unless it is explicitly forbidden. Granted, this model is mostly being tossed around at Gartner Conferences, but I do think you will see more companies that are willing to loosen their grip on the information that doesn’t need protection, and focus their efforts on those pieces of data that are truly important to their company.
- Age Old Questions – Fine-Grained vs. Coarse-Grained. Roles vs. Rules. Pirates vs. Ninjas. These are questions that every organization has discussed as they are building their entitlements model.
- Should the entitlements be internal to the application or externalized for unified administration?
- Should roles be used to grant access, should we base those decisions on attributes about the users, or should we use some combination?
- Did he really throw Pirates vs. Ninjas in there to see if we were still paying attention? (Yes. Yes, I did).
There are no cut and dry answers for these questions, as it truly will vary from application to application and organization to organization. The important part is to come to a consensus on the approach and then provide the application teams, developers and security staff the tools to manage entitlements going forward.
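One way to frame the roles-vs-rules question is that both reduce to policy functions evaluated under a least-privilege, implicit-deny default. The sketch below is illustrative only; the policy shapes and attribute names are invented for the example, not drawn from any product:

```python
def role_policy(allowed_roles, action):
    """Grant when the user holds one of the named roles (RBAC-style)."""
    def check(user, req_action, resource):
        return req_action == action and bool(set(user["roles"]) & set(allowed_roles))
    return check

def department_policy(action):
    """Grant when the user's department matches the resource's (ABAC-style)."""
    def check(user, req_action, resource):
        return req_action == action and user.get("department") == resource.get("department")
    return check

def is_authorized(user, action, resource, policies):
    """Least privilege: any matching policy grants; otherwise implicit deny."""
    return any(policy(user, action, resource) for policy in policies)
```

The combination case from the discussion above falls out naturally: the policy list can mix role-based and attribute-based checks, and the enforcement point stays the same.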
- Are We Using The Right Tools? – This discussion always warms my heart, as finding the right technical solution for customers’ IAM needs is what I do for a living. I have my favorites and would love to share them with you, but that is for another time. As with the other topics, there really isn’t a cookie-cutter answer. The right tool will come down to how you need to use it, what sort of architecture you have, your selected development platform, and what sort of system performance you require. Make sure you aren’t making the decisions on the topics above based on your selected tool, but rather choosing the tool based on the answers to those important questions.
Since 2003, our teams have been part of over 350 efforts to implement or rescue Identity and Access Management (“IAM”) projects. Our customers, most of whom have become great friends, are from all over the US, crossing many disparate and unique industries. In most cases, they are working to solve similar business problems and to fix or improve the same processes.
On many occasions, we started from the ground floor and had the opportunity to create a roadmap for their long-term strategy. To those familiar with IAM capabilities, it may seem obvious what to prioritize and where to start, but let’s not be so quick to jump to what looks like an easy decision. All IAM capabilities are not created equal.
Our job as a systems integrator is to successfully implement these complex IAM security technologies, and to ensure that our customers maximize the return on their significant IT investments. As we help guide our customers through these decisions, this ROI is always our priority, which leads us to the topic for today.
So how, exactly, can we accomplish this? This article is all about alignment. Having alignment between IT and the other key stakeholders will significantly reduce the risk of your IAM projects failing and losing or wasting precious budget dollars.
How can you ensure you have correct alignment? Here’s how our IAM Roadmap process breaks down.
Step 1 – We start by prioritizing a list of over 100 key IAM capabilities. This list was compiled from our work over the years and is vendor agnostic. After a brief explanation to help educate the stakeholders, we apply a ranking of low, medium or high based on their opinion of how much the organization needs a certain capability. Typical examples of high priority capabilities are automated provisioning of accounts, self-service password reset and role recertification.
Here’s a common example of the final output
Step 2 – Once you have a high priority list to work from, we dig a little deeper into three categories of analysis. The first category is Business Benefit – How significant is the true Business Benefit of this capability? Is this just a shiny new IT toy, or will the stakeholders see lift and leverage from adopting this functionality? It’s critical to have your business stakeholders at the table so they can weigh in.
Step 3 – The second category is for your technical staff regarding Technical Complexity – How technically difficult will it be to configure or customize this solution? Are there products that provide this feature and function out of the box? Will your team be able to update and maintain the tools going forward or is this going to be way outside of their comfort zone and expertise? Is it cost prohibitive based on the benefit? This is where we can weigh in to help provide some context as well.
Step 4 – The third and last category is about Organizational Readiness – can the company readily adopt the capability? Are there so many competing priorities that gaining mindshare and focus will be difficult? Do you have the buy-in from stakeholder leadership? Is everyone at the table truly on board with this project and these priorities? Will they drive the effort through their organizations?
Step 5 – Once you’ve made it through this list and conducted a robust debate and Q&A with these three key questions, it’s time to score and rank the results. Amazingly you will see a handful of capabilities that float to the top where the Business Benefit is high, the Technical Complexity is low and the Organizational Readiness is high. After a short review of the results with some discussion and debate, a solid scope of high priority capabilities emerge as candidates for the first one or two phases of a successful program.
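The scoring in Step 5 can be made concrete with a simple weighted ranking. The numeric mapping below (high=3, medium=2, low=1, with complexity subtracted) is one plausible scheme for illustration, not PathMaker Group's actual formula:

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def score(capability):
    """Favor high Business Benefit and Organizational Readiness;
    penalize high Technical Complexity."""
    return (LEVELS[capability["benefit"]]
            + LEVELS[capability["readiness"]]
            - LEVELS[capability["complexity"]])

def rank(capabilities):
    """Sort capabilities so the best phase-one candidates float to the top."""
    return sorted(capabilities, key=score, reverse=True)
```

Capabilities with high benefit, high readiness and low complexity score highest, which matches the "float to the top" behavior described above.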
The next step is to choose a product that can fulfill these priorities, and then you are off and running. The advantage you have is that your stakeholders are more educated and have bought into the process and priorities – they are aligned. At the first sign of deviating from scope or questioning why we are including specific capabilities, you simply go back to the prior analysis and remind the team of the decision-making rationale.
Does this IAM Roadmap process guarantee a successful project or program? Not necessarily. But having all your stakeholders at the table and aligned provides a huge advantage and a great start.
It seems all too often that when users are unable to access an end-user business function protected by an IDAM (Identity and Access Management) solution, the IDAM system gets the brunt of the blame, in many cases without justification. Today’s corporate web-based business functions are complex, service-oriented systems. As such, it can be difficult to diagnose particular issues in a timely manner without having to restart several components. As an issue persists, security controls may be removed or bypassed altogether, resulting in another set of problems. In many cases the root cause never gets identified and a repeat incident occurs.
Example Use Case
Consider a system that hosts a web application providing an end-user business function to allow users to sign up for service and be able to pay their bills online. To protect the web application, an Oracle IDAM system, referred to as the SSO Stack, is implemented to provide access control and data protection for the end-users. As you can see, there are a lot of complicated flows and dependencies in the systems.
Suppose an issue has been reported by an end-user and technical support personnel are logged in to try and resolve the issue. To illustrate the complexity of the issue, suppose an end-user cannot access the system to pay their bill. Without having an in-depth knowledge of what is going on inside the systems, it is difficult to determine if the web application is the problem or if the problem is related to the SSO Stack. If it is the SSO Stack, which component is at fault?
Remember the movie The Matrix: “take the red pill” and find out what is really going on in the matrix; “take the blue pill” and you live in ignorance and bliss. When troubleshooting systems, the tendency is to collect and analyze logs on each of the system components independently, troubleshoot at the network level, and execute manual user tests, all of which is time consuming. How many times have you heard someone say “I can ping the server just fine” yet the problem persists?
“What if I told you”, testing at the application layer provides a more accurate indication of what is really going on inside the system. The business functionality is either working as intended or it is not. Applications performing the business functions can be modeled as services and tested in real-time. Service tests can measure the end-user’s ability to access a service and if automated, allow issues to be resolved before end-user complaints start rolling in. Service tests strategically placed in each critical subsystem can be used as health checks determining which system component may be at fault if there are reported issues.
EM12c Cloud Control Service Model
With EM12c Cloud Control, business functions can be modeled as services to be monitored for availability and performance. Systems can be defined based on the target components hosting the service. As a service is defined, it is associated with a system and one or more service tests. Service tests emulate the way a client would access the service and can be set up using out-of-the-box test frameworks: web testing automation, SQL timing, LDAP, SOAP, ping tools, etc., and can be extended through Jython-based scripting support. The availability of a service can be determined by the results of service tests or by system performance metrics, and those metrics can also feed usage reporting and service level agreement (SLA) tracking. Additionally, aggregate services can be modeled to consist of sub-services, with the availability of the aggregate service dependent on the availability of each of the individual sub-services.
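The aggregate-service idea maps naturally onto code: the aggregate is up only if every subservice test passes, and the failing subservices point to the suspect component. This is a hedged Python sketch of the concept; the probe callables stand in for EM12c's out-of-the-box service tests and are not an EM12c API:

```python
def run_service_tests(probes):
    """Run each subservice probe (a callable returning True on success)."""
    return {name: probe() for name, probe in probes.items()}

def aggregate_available(results):
    """An aggregate service is available only if every subservice test passes."""
    return all(results.values())

def suspect_components(results):
    """Subservices whose tests fail are the candidates for root cause."""
    return sorted(name for name, passed in results.items() if not passed)
```

For an SSO Stack modeled as {SSO Service, STS Service, Directory Service, Database Service}, a failing SSO Service probe alongside passing Directory and Database probes isolates the fault to the OAM server component, mirroring the walkthrough in the next section.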
Example Use Case Revisited with EM12c Service Model
Revisiting the issue reported in the previous use case, it was not a trivial task to determine whether or not it was an SSO issue and which component or components were at fault. Now consider modeling the consumer service and running web-automation end-user service tests against the web application, and modeling the SSO stack as a service representing the Identity and Access Management functionality. The SSO Stack can be defined as an aggregate service with the following subservices: SSO Service, STS Service, Directory Service and Database Service. The availability and performance of the SSO Stack can then be measured from the availability and performance of each subservice in the chain. Going back to the problem reported in figure 1, the end user could not access the web application to pay their bill. Suppose service tests are set up to run at the various endpoints as illustrated in figure 3. As expected, the end-user service tests show failures. If the service tests for the Directory Service and Database Service are passing, it can be concluded that the problem lies within the OAM server component. Looking further into the results of the SSO Service and STS Service tests, the problematic application within the OAM server can be determined. As this illustration points out, service tests provide a more systematic way of troubleshooting and can lead you to a speedier resolution and root cause.
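The fault-isolation reasoning above can be sketched as a few lines of code. This is an illustrative model only, not anything EM12c exposes: the subservice names come from the article, the dependency order and the `isolate_fault` function are assumptions made for the example.

```python
def isolate_fault(results):
    """Given pass/fail service-test results for each subservice of the
    SSO Stack, in dependency order (deepest dependency first), return the
    first failing subservice: the most likely faulty component, since
    everything it depends on is known to be passing."""
    order = ["Database Service", "Directory Service", "STS Service", "SSO Service"]
    for name in order:
        if not results.get(name, False):
            return name
    return None  # all subservices passing; look outside the SSO Stack

# The scenario from the use case: end-user tests fail, but the Directory
# and Database subservice tests pass, so suspicion falls on the OAM tier.
results = {"Database Service": True, "Directory Service": True,
           "STS Service": False, "SSO Service": False}
print(isolate_fault(results))  # -> STS Service
```

The point is not the code itself but the discipline it encodes: with a test at each endpoint, "which component is at fault?" becomes a lookup rather than an investigation.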
EM12c Cloud Control Features
The following are some of the features of the EM12c Cloud Control monitoring solution that provide the capabilities described above and are not available in the basic Enterprise Manager Fusion Middleware Control.
- Service Management:
- Service Definition: Defining a service as it relates to a business function and modeling services end to end with aggregate services.
- Service tests: Web traffic, SOAP, REST, LDAP, SQL, ping, etc., to determine end-user service and system-level availability and performance.
- System monitoring: Monitoring a group of targets that together provide a specific business function.
- Service level agreements (SLAs) with monitoring and reporting for optimization.
- Performance monitoring
- Defining thresholds for status, performance and alerts
- Out-of-the-box and custom available metrics
- Real-time and historical metric reporting with target comparison
- Dashboard views that can be personalized.
- Service level agreement monitoring
- Incident reporting based on availability and performance threshold crossings, with escalation and tracking from open to closure. Can also be used to track SLAs.
- System and service topology modeling for viewing dependencies. Can help with performance and service-level optimization and root cause analysis.
- Oracle database availability and performance monitoring:
- Throughput transaction metrics on reads, writes and commits
- DB wait time analysis
- View top SQL statements and their CPU consumption by SQL ID.
- DBA task assistance:
- Active Data Guard and standby management
- RMAN backup scheduling
- Log and audit monitoring
- Multi-domain management: Production, test and development domains with RBAC rules, all managed from one console.
- Automated discovery of Identity Management and Fusion Middleware components
- Plug-ins from third parties and developer tools, with Jython scripting support to extend service tests, metrics, etc.
- Log pattern matching that can be used as a customizable alerting mechanism and performance tool.
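As a sketch of how log pattern matching drives alerting, the snippet below scans log lines against configured patterns and emits severity-tagged alerts. The rules, patterns and severities here are hypothetical examples, not EM12c's configuration format; in the product, patterns are defined through its log file monitoring configuration rather than in code.

```python
import re

# Hypothetical alert rules: (compiled pattern, severity). In EM12c these
# would be configured patterns on a monitored log file; this sketch just
# shows the matching mechanism.
RULES = [
    (re.compile(r"ORA-\d{5}"), "critical"),          # Oracle database errors
    (re.compile(r"authentication failed", re.I), "warning"),
]

def scan(lines):
    """Return (severity, line) pairs for every log line matching a rule."""
    alerts = []
    for line in lines:
        for pattern, severity in RULES:
            if pattern.search(line):
                alerts.append((severity, line))
    return alerts
```

Matched counts over time can double as a coarse performance signal, which is why the article calls pattern matching both an alerting mechanism and a performance tool.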
- Track and compare configurations for diagnostic purposes.
- Automated patch deployment and management.
- Integration with My Oracle Support
A Final Note: Why It Is Called EM12c Cloud Control
One of the advanced uses of Oracle Enterprise Manager 12c is managing multiple phases of the cloud lifecycle, such as the planning, setup, build, deployment, monitoring, metering/chargeback, and optimization of the cloud. With its comprehensive cloud management capabilities, Oracle Enterprise Manager 12c enables rapid deployment and end-to-end monitoring of infrastructure as a service (IaaS) and platform as a service (PaaS), including database as a service (DBaaS), schema as a service (Schema-aaS), and middleware as a service (MWaaS).
Who We Are
PathMaker Group is a specialized Security and Identity Management Consultancy, blending core technical and product expertise, consultative know-how, and extensive implementation experience.
635 Fritz Drive
Coppell, TX 75019
1250 Capital of Texas Hwy
Bldg 3, Suite 400
Austin, TX 78746