DataCenterAutomation Data Sheet
IT Operations Management
Provision container server farms. Provision Docker-based Kubernetes clusters. Out-of-the-box customizable provisioning templates for Kubernetes/Docker clusters are used to deploy completely configured container infrastructure. Worker nodes can be provisioned and pointed to a selected master.
Process Orchestration

Orchestration workflows. Thousands of out-of-the-box (OOTB) workflows are provided to perform orchestrated tasks across the datacenter for servers, databases, and middleware.

Orchestrate any provision, patch, or compliance process. Integrate with third-party tools and existing content by creating workflows that invoke vendor APIs (SOAP, REST, PowerShell, etc.) or open source scripts. Run orchestration workflows from the UI or through the open APIs.
Create and debug workflows. Create or modify workflows using the drag-and-drop workflow studio. Workflows are created with variable placeholders for parameter inputs (credentials, IPs, etc.) so they are highly reusable. Debug and test workflows in the studio before placing them into production.
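As an illustration of driving a workflow through the open APIs, the sketch below starts a workflow run over REST from Python. The base URL, route, payload fields, and credentials are assumptions for illustration only, not the documented DCA endpoints; consult the DCA API reference for the actual resource names.

```python
# Illustrative only: the endpoint path, payload fields, and credentials below are
# assumptions, not the documented DCA/CDF API.
import requests

DCA_API = "https://dca.example.com/api"          # hypothetical base URL
AUTH = ("workflow-svc", "s3cr3t")                # hypothetical service account

def run_workflow(workflow_id: str, inputs: dict) -> str:
    """Start an orchestration workflow run and return its execution ID."""
    resp = requests.post(
        f"{DCA_API}/workflows/{workflow_id}/executions",   # hypothetical route
        json={"inputs": inputs},                           # values for the workflow's parameter placeholders
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["executionId"]                      # hypothetical response field

if __name__ == "__main__":
    execution = run_workflow(
        "patch-linux-servers",
        {"targetIPs": "10.0.0.21,10.0.0.22", "credentialId": "linux-root"},
    )
    print(f"Started execution {execution}")
```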
Containerized Suite Deployment Option

Container Deployment Foundation (CDF) is the foundation required to install the new containerized version of DCA. CDF has a simple install process and, once installed, handles all provisioning, orchestration, and management of the underlying core Kubernetes/Docker cluster infrastructure. The CDF UI is a single portal used for DCA suite and CDF platform management tasks such as installs or upgrades.
DCA suite management. Monitor job queues and check the health and status of individual service pods from the analytics dashboard. Debug issues by viewing log files and configuration files from the UI. Create and manage suite namespaces and perform other suite configuration tasks, including installs and upgrades.
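The analytics dashboard is the supported view for pod health; for teams that also work at the cluster level, a minimal sketch using the standard Kubernetes Python client shows the same pod status information. The namespace name "dca" is an assumption; substitute the suite namespace created during installation.

```python
# Minimal sketch with the standard Kubernetes Python client (pip install kubernetes).
# The namespace "dca" is an assumption; use the suite namespace created by CDF.
from kubernetes import client, config

def report_pod_health(namespace: str = "dca") -> None:
    config.load_kube_config()                 # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        ready = sum(1 for c in (pod.status.container_statuses or []) if c.ready)
        total = len(pod.spec.containers)
        print(f"{pod.metadata.name:40s} {pod.status.phase:10s} {ready}/{total} ready")

if __name__ == "__main__":
    report_pod_health()
```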
High Availability (HA) PostgreSQL clustered databases. The DCA suite on CDF uses an embedded HA PostgreSQL cluster, which provides resiliency and increased performance capacity. In the event any one pod per PostgreSQL cluster fails, DCA services will continue to run without disruption.

Scale horizontally. Easily scale DCA for greater resource capacity by adding new Kubernetes worker nodes and/or configuring multiple master nodes. Worker nodes can be added from the CDF UI. Once credentials are provided for the new worker node, CDF installs Kubernetes/Docker on the node. When CDF completes the provisioning of the new worker node, it is added to the cluster and begins to accept workloads from the master.
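Because the added worker ends up as a regular Kubernetes node, its membership can be confirmed with standard Kubernetes tooling once CDF finishes provisioning. A small sketch with the Kubernetes Python client is shown below; node names in the output depend on your environment.

```python
# Small sketch (standard Kubernetes Python client) to confirm that a worker node
# added through the CDF UI has joined the cluster and reports Ready.
from kubernetes import client, config

def node_is_ready(node) -> bool:
    return any(c.type == "Ready" and c.status == "True" for c in node.status.conditions)

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    state = "Ready" if node_is_ready(node) else "NotReady"
    print(f"{node.metadata.name:30s} {state}")
```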
Headless Operation. Because CDF is built on open APIs, any DCA on CDF feature is available using RESTful APIs. These APIs enable the full capacity of DCA to be leveraged from any technology capable of consuming an API.

ChatOps Collaboration

Collaborate with systems and teams. A Slack channel can be used to run DCA compliance commands or retrieve compliance information in the channel. Users are authenticated against the DCA IDM using HuBot Enterprise to ensure that the Slack user has the required permissions to execute the requested command. Invite other team members to participate in diagnosis and remediation of compliance issues in a conversation-like manner. Obtain resource group compliance status and information, watch resources for a change in compliance status, and remediate non-compliance issues from the Slack channel.
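The product integration is delivered through HuBot Enterprise and the DCA IDM; purely as an illustration of the Slack side of such a flow, the sketch below posts a compliance summary into a channel with the public Slack Web API (chat.postMessage). The bot token, channel name, and summary text are placeholders.

```python
# Generic illustration of posting a compliance summary to a Slack channel via the
# public Slack Web API. The token, channel, and message content are placeholders,
# not part of the DCA/HuBot Enterprise integration itself.
import os
import requests

def post_compliance_summary(channel: str, text: str) -> None:
    resp = requests.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {os.environ['SLACK_BOT_TOKEN']}"},
        json={"channel": channel, "text": text},
        timeout=15,
    )
    resp.raise_for_status()
    if not resp.json().get("ok"):
        raise RuntimeError(resp.json().get("error", "unknown Slack API error"))

post_compliance_summary(
    "#dca-compliance",
    "Resource group 'prod-db' is 3 rules out of compliance.",
)
```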
Virtualized Infrastructure Optimization

Performance statistics. View performance, utilization, and capacity of virtual environments. Quickly identify waste and reduce sprawl caused by idle or oversized VMs.

Infrastructure planning. Best-fit placement suggestions for new workloads help determine where a new VM can be provisioned and how the environment should be sized, based on historical usage trends and available capacity.

Forecast Reports. Forecast reports use historical consumption and performance data trends to determine the number of days until a resource will reach capacity.
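As a simplified illustration of the underlying arithmetic (not DCA's actual forecasting model), the sketch below fits a linear trend to daily utilization samples and estimates the number of days until a resource reaches capacity. The sample values and capacity figure are made up.

```python
# Illustrative arithmetic only -- not DCA's forecasting model. A simple linear trend
# over daily utilization samples estimates how many days remain until a resource
# (here, datastore usage in GB) reaches capacity.
from statistics import mean

def days_until_full(daily_usage_gb: list[float], capacity_gb: float) -> float | None:
    """Estimate days until capacity from a least-squares slope of daily samples."""
    n = len(daily_usage_gb)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(daily_usage_gb)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, daily_usage_gb))
             / sum((x - x_bar) ** 2 for x in xs))
    if slope <= 0:
        return None                       # flat or shrinking usage: no exhaustion date
    return (capacity_gb - daily_usage_gb[-1]) / slope

print(days_until_full([610, 618, 631, 640, 652, 660, 673], capacity_gb=1024))  # ~33 days
```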
New Features
• Containerized suite deployment option
• Service Level Objective (SLO) policy-based dynamic patching and regulatory compliance
• Puppet Integration
• Dynamic patching with exception management
• CVE Risk Dashboard
• Compliance Dashboards
• Deployment of Docker-based Kubernetes clusters
• APIs for headless operations
• High Availability (HA) with PostgreSQL clustered databases
• ChatOps collaboration tool
System Requirements1

DCA on CDF Minimum Hardware Requirements
• (1 DCA Server) 8 CPU / 32 GB RAM / 200 GB HDD
• (1 NFS Server) 4 CPU / 8 GB RAM / 100 GB HDD

Operating Systems
• RHEL x86_64 (ver 7.3)
• Oracle Enterprise Linux x86_64 (ver 7.3)
• CentOS x86_64 (ver 7.3)

Languages
• Localization is not supported in DCA
|                                                                   | Express (Provisioning and configuration) | Premium (Patching, compliance and remediation) | Ultimate (Infrastructure optimization) |
|-------------------------------------------------------------------|------------------------------------------|------------------------------------------------|----------------------------------------|
| Provisioning                                                      |                                          |                                                |                                        |
| Server Discovery, Config, OS Provisioning and SW Deployment2      | X                                        | X                                              | X                                      |
| Database and Middleware Discovery, Config and Deployment          | X                                        | X                                              | X                                      |
| Infrastructure3 LCM with Runbook Automation and Reporting         | X                                        | X                                              | X                                      |
| Compliance                                                        |                                          |                                                |                                        |
| Patching for Server OS and Applications                           |                                          | X                                              | X                                      |
| Server Compliance, Audit and Remediation (+subscription content)  |                                          | X                                              | X                                      |
| Database and Middleware Patching and Code release                 |                                          | X                                              | X                                      |
| Database and Middleware Compliance, Audit and Remediation (+subscription content) |                         | X                                              | X                                      |
| Database and Middleware Upgrades and Migrations                   |                                          | X                                              | X                                      |
| Optimization                                                      |                                          |                                                |                                        |
| Server Infrastructure Analytics                                   |                                          |                                                | X                                      |
| Virtual Infrastructure Capacity and Optimization                  |                                          |                                                | X                                      |
| Planning and Forecasting                                          |                                          |                                                | X                                      |
360-000114-002 | 8644 | H | 03/18 | © 2018 Micro Focus. All rights reserved. Micro Focus and the Micro Focus logo, among others, are trademarks or
registered trademarks of Micro Focus or its subsidiaries or affiliated companies in the United Kingdom, United States and other countries. All other marks
are the property of their respective owners.