Workload Automation Blog – BMC Software | Blogs

2024 Magic Quadrant™ for Service Orchestration and Automation Platforms (SOAP)
https://www.bmc.com/blogs/soaps-service-orchestration-automation-platforms/ (Tue, 17 Sep 2024)

According to Gartner, Service Orchestration and Automation Platforms (SOAP) “enable I&O leaders to design and implement business services. These platforms combine workflow orchestration, workload automation and resource provisioning across an organization’s hybrid digital infrastructure.”

The 2024 Gartner® Magic Quadrant™ for Service Orchestration and Automation Platforms (SOAP) is now available. This is the first Gartner Magic Quadrant for SOAP, and we are pleased to announce that BMC has been named a Leader!

As a recognized industry expert, BMC prioritizes customer experience with a commitment to helping organizations maximize the value of our solutions.

“We are delighted to be recognized as a Leader in the inaugural Gartner Magic Quadrant for Service Orchestration and Automation Platforms report. This, we feel, is a testament to our customer relationships and helping them to achieve their evolving business initiatives over many years,” said Gur Steif, president of digital business automation at BMC. “We are continuing to invest in the future of the market focused on AI, data, and cloud innovations and are excited about our customers, our partners, and the opportunities ahead.”

We believe Control-M from BMC and BMC Helix Control-M simplify the orchestration and automation of highly complex, hybrid and multi-cloud applications and data pipeline workflows. Our platforms make it easy to define, schedule, manage and monitor application and data workflows, ensuring visibility and reliability, and improving SLAs.

In addition, BMC Software was recognized in the 2024 Gartner® Market Guide for DataOps Tools. As stated in this report, “DataOps tools enable organizations to continually improve data pipeline orchestration, automation, testing and operations to streamline data delivery.”

In the Magic Quadrant for SOAP, Gartner provides detailed evaluations of 13 vendors. BMC is named as a Leader, based on the ability to execute and completeness of vision.

Here’s a look at the quadrant.

2024 Magic Quadrant for SOAP

Download the full report to:

  • See why BMC was recognized as a Leader for SOAP
  • Learn about the latest innovations delivered in the category

Gartner, Magic Quadrant for Service Orchestration and Automation Platforms, by Hassan Ennaciri, Chris Saunderson, Daniel Betts, Cameron Haight, 11 September 2024

Gartner, Market Guide for DataOps Tools, by Michael Simone, Robert Thanaraj, Sharat Menon, 8 August 2024

GARTNER is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from BMC. Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Crack the (Transaction) Code to Get More Value from SAP® SM36 and SM37
https://www.bmc.com/blogs/crack-transaction-code-sm36-sm37/ (Fri, 16 Aug 2024)

SAP’s native job scheduling and monitoring tools, Transaction Codes (t-codes) SM36 and SM37, are fundamental components of many SAP systems. Control-M allows enterprises to get new value from these longtime system staples without requiring resource-intensive redevelopment. This blog explains how.

Individual enterprises may have developed thousands of SM36 and SM37 t-codes across countless servers and SAP instances to help manage the day-to-day tasks that keep SAP systems running. SM36 and SM37 codes clearly have an important place in SAP systems. Sometimes, users tell us that the problem is that the place for SM36 and SM37 is too clearly defined. SM36 and SM37 are system-specific, which means they can only be used to schedule and monitor jobs for the system on which they reside. That works fine for siloed jobs that execute completely within a single server or SAP instance. If the larger workflow needs to break out of that silo, for example to accept a data stream from a cloud-based system or to update a database on another server, then additional SM36 and SM37 codes need to be developed for those systems.

Unless…

…organizations enhance their SM36 and SM37 t-codes through Control-M (self-hosted or SaaS). The Control-M suite orchestrates workflow execution across the entire enterprise IT environment. It is not system-specific and is widely used to execute highly complex, multi-element workloads across hybrid cloud environments. For SAP users, Control-M provides a complete view of all SAP jobs—and their dependent jobs—across the enterprise. Here's a deeper look at how Control-M is used with SM36 and SM37 to help overcome some of their limitations.

SM36 and SM37 keep systems running

SM36 and SM37 are great tools for scheduling and monitoring the thousands of jobs that run within SAP environments every day. These jobs can include making file transfers, updating databases, running reports, pushing updates to business users, and much more. SM36, SM37, and other t-codes are typically developed in SAP’s Advanced Business Application Programming (ABAP), so responsibility for creating and maintaining them goes to SAP experts, either in-house or at a service provider. As noted, SM36 and SM37 are system-specific. When a file transfer or other job type needs to run on different instances of SAP, or in different versions of SAP, separate t-codes need to be developed for each. If data is handed off from one system to another, or if another workflow similarly crosses different systems, then SM36 and SM37 jobs need to be developed to handle those situations. Such arrangements are standard operating procedure for many enterprises. The process works, although it is a bit resource-intensive and not very transparent.

There are a couple of situations where the skilled resource requirements associated with using SM36 and SM37 have historically caused some problems. One is when there is a job failure or unexplained delay. The other is when the job workflow needs to interact with systems outside of the one where it resides. Let’s look at each of these situations briefly.

SM36 and SM37 job performance problems or outright failures can be difficult to diagnose and debug in complex ecosystems because IT administrators need to look at the logs for all of the associated t-codes in each of the systems that the job touches. Many delays and failures result from one component of a workflow not being ready on time. For example, if a file transfer runs late, the job to run a report that needs data from the file can't start. Such problems can be prevented with automated orchestration; without it, they must be debugged manually.

Patch and upgrade installations are other risky periods for job failures. For organizations that use SM36 and SM37, every annual, monthly, and quarterly update or special patch requires manually stopping all inbound and outbound jobs for the upgrade, then restarting them once the update is complete. That’s time-consuming and puts business-critical processes at risk.

As systems become more complex, the chance of workflow failures increases because workflows need to be orchestrated across different systems. How well enterprises can orchestrate workflows depends on how much visibility they have into them. If you can’t see the status of a data stream or other dependent job until after the handoff occurs (or after it fails to occur as scheduled), then you don’t have the visibility you need to prevent failures.

How Control-M can help

Control-M adds the orchestration component to SM36 and SM37 job scheduling and monitoring. It addresses the leading problems associated with those tools by providing visibility across the breadth of the IT environment (helping to foresee and prevent dependency-related job failures) and drill-down depth into the jobs themselves, which enables prevention or fast remediation.

Control-M provides visibility into jobs across the entire SAP ecosystem, regardless of location. If you want to see or change the status of a workflow you go to one place—the Control-M user interface—instead of having to check multiple systems for the status of each step in the workflow. Control-M displays the overall status and gives you the option to drill down and see any dependent job. Users can see the status of an entire workflow with the same or less effort than it takes to do an SM37 check on a single component. That makes managing the SAP environment a lot less resource-intensive.

Here's an example. System administrators typically investigate SM36 or SM37 job failures by running an ST22 dump analysis. Without Control-M, the logs and short dumps need to be manually inspected. That procedure takes time and may need to be repeated across multiple SAP instances. Admins don't need to pull and review logs if they use Control-M because it can automatically monitor logs with no manual intervention required. And, because of Control-M's ability to automatically monitor all dependencies, a potential problem with one job can be quickly identified and addressed before it affects others. That way, Control-M increases uptime without requiring an increase in administrator support time.

The Italy-based energy infrastructure operator Snam S.p.A. reported it reduced workflow errors by 40 percent after introducing Control-M to its SAP environment, which included SAP ERP Central Component (ECC) and SAP S/4HANA. Freeing up its specialized SAP staff also helped the company improve its business services time to market by 20 percent. You can learn more about the program here.

These are just a few of the ways that orchestrating SM36 and SM37 codes through Control-M reduces complexity and saves time in day-to-day operations. Control-M also provides some other advantages during migrations to SAP S/4HANA or other versions, as this white paper explains.

Control-M is a great complement to SM36 and SM37 because it breaks down silos and works across the entire SAP environment, no matter how complex, and breaks the enterprise’s dependence on a few skilled SAP specialists to manually create and monitor all workflow jobs. Its self-service capabilities enable citizen development in a safe way, where IT retains control over workflows, but business users can handle their own requests. Above all, Control-M creates visibility over all SM36 and SM37 jobs and the entire SAP ecosystem.

Orchestrate ML Workflows: Retail Forecasting, Inventory Management in POS and Supply Chain
https://www.bmc.com/blogs/orchestrating-ml-workflows-retail-inventory/ (Mon, 22 Jul 2024)

Predicting point-of-sale (POS) sales across stores, coordinated with inventory and supply chain data, is table stakes for retailers. This blog explains this use case leveraging PySpark for data and machine learning (ML) pipelines on Databricks, orchestrated with Control-M to predict POS and forecast inventory items. This blog has two main parts. In the first section, we will cover the details of a retail forecasting use case and the ML pipeline defined in Azure Databricks. The second section will cover the integration between Control-M and Databricks.

Developing the use case in Azure Databricks

Note: All of the code used in this blog is available at this GitHub repo.

In real life, data would be ingested from sensors and mobile devices, with near-real-time inventory measurements and POS data across stores. The data and ML pipeline is coordinated with Control-M to integrate the different components and visualize the results in an always-updated dashboard.

The data lands in the Databricks Intelligent Data Platform and is combined, enriched, and aggregated with PySpark jobs. The resulting data is fed to different predictive algorithms for training and forecasting sales and demand, with the results:

  • Visualized in graphical dashboards
  • Written as Delta files to a data repository for offline consumption

In this post, we will also walk through the architecture and the components of this predictive system.

Data set and schema

The project uses real-world data, truncated in size and width to keep things simple. A simplified and abridged version of the schema is shown in Figure 1. The location table is a reference table, obtained from public datasets. The color coding of the fields shows the inter-table dependencies. Data was obtained partially from Kaggle and other public sources.


Figure 1. Example of schema.

Platform and components

The following components are used for implementing the use case:

  • Databricks Intelligent Data Platform on Azure
  • PySpark
  • Python Pandas library
  • Python Seaborn library for data visualization
  • Jupyter Notebooks on Databricks
  • Parquet and Delta file format

Project artifacts

  • Working environment on Azure
  • Code for data ingestion, processing, ML training, and serving and saving forecasted results to Databricks Lakehouse in delta format
  • Code for workflow and orchestration with Control-M to coordinate all the activities and tasks and handle failure scenarios

High-level architecture and data flow

The current architecture assumes that data lands in the raw zone of the data lakehouse as a CSV file with a pre-defined schema, in batch. The high-level overview of the data flow and associated processes is shown in Figure 2.


Figure 2. Overview of data flow and associated processes.

Data and feature engineering

Currently the data and ML pipelines are modeled as a batch process. Initial exploratory data analysis (EDA) was done to understand the datasets and relevant attributes contributing to predicting the inventory levels and POS sales. Initial EDA indicated that it is useful to transform the dates to “day of the week”- and “time of day”- type categories for best predictive results. The data pipelines included feature engineering capabilities for datasets that had time as part of the broader dataset.

Figure 3 shows a sample data pipeline for the POS dataset. Figure 4 shows a similar data pipeline for the inventory dataset. After data transformation, the transformed tables were joined to form a de-normalized table for model training. This is shown in Figure 5 for enriching the POS and the inventory data.


Figure 3. A sample data pipeline for POS dataset.

Figure 4. Sample data pipeline for the inventory dataset.


Figure 5. The joining of transformed tables to form a de-normalized table for model training.

Model training

The ML training pipeline used random forest and linear regression to predict the sales and inventory levels. The following modules from PySpark were used to create the ML pipeline and do one-hot encoding on the categorical variables.

  • pyspark.ml.Pipeline
  • pyspark.ml.feature.StringIndexer, OneHotEncoder
  • pyspark.ml.feature.VectorAssembler
  • pyspark.ml.feature.StandardScaler
  • pyspark.ml.regression.RandomForestRegressor
  • pyspark.ml.evaluation.RegressionEvaluator
  • pyspark.ml.regression.LinearRegression

The enriched data was passed to the pipelines and the different regressor models were applied to the data to generate the predictions.

The RegressionEvaluator module was used to evaluate the results, and the mean absolute error (MAE), root mean squared error (RMSE), and R-squared metrics were generated to evaluate the predicted results. Feature weights were used to understand the contribution of each feature to the predictions.
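
For illustration, here is a minimal sketch of such a training and evaluation pipeline. It is not the project's exact notebook code (that lives in the GitHub repo linked above), and the column names (store_id, day_of_week, time_of_day, inventory_level, unit_price, pos_sales) are hypothetical placeholders for the engineered features described earlier.

from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler, StandardScaler
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.evaluation import RegressionEvaluator

# Hypothetical engineered columns; the real feature set comes from the EDA described above.
categorical_cols = ["store_id", "day_of_week", "time_of_day"]
numeric_cols = ["inventory_level", "unit_price"]
label_col = "pos_sales"

# Index and one-hot encode the categorical variables.
indexers = [StringIndexer(inputCol=c, outputCol=f"{c}_idx", handleInvalid="keep") for c in categorical_cols]
encoder = OneHotEncoder(inputCols=[f"{c}_idx" for c in categorical_cols],
                        outputCols=[f"{c}_ohe" for c in categorical_cols])

# Assemble the encoded and numeric features into one vector, then scale it.
assembler = VectorAssembler(inputCols=[f"{c}_ohe" for c in categorical_cols] + numeric_cols,
                            outputCol="features_raw")
scaler = StandardScaler(inputCol="features_raw", outputCol="features")

rf = RandomForestRegressor(featuresCol="features", labelCol=label_col)
pipeline = Pipeline(stages=indexers + [encoder, assembler, scaler, rf])

# enriched_df is assumed to be the de-normalized table produced by the joins shown in Figure 5.
train_df, test_df = enriched_df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train_df)
predictions = model.transform(test_df)

# Evaluate with the metrics mentioned above (MAE, RMSE, R-squared).
evaluator = RegressionEvaluator(labelCol=label_col, predictionCol="prediction")
for metric in ["mae", "rmse", "r2"]:
    print(metric, evaluator.setMetricName(metric).evaluate(predictions))

# Feature importances from the fitted random forest (the last pipeline stage).
print(model.stages[-1].featureImportances)

Swapping RandomForestRegressor for LinearRegression in the final pipeline stage yields the second model family mentioned above, with the fitted coefficients serving as the feature weights.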

Orchestrating the end-to-end predictions

Data orchestration of the different PySpark notebooks uses a Databricks Workflows job while the production orchestration is performed by Control-M using its Databricks plug-in. This approach enables our Databricks workflow to be embedded into the larger supply chain and inventory business workflows already managed by Control-M for a fully automated, end-to-end orchestration of all related processing. Furthermore, it gives us access to advanced scheduling features like the ability to manage concurrent execution of multiple Databricks workflows that may require access to constrained shared resources such as public IP addresses in our Azure subscription.

Figure 6 shows the different orchestrated tasks to generate the offline predictions and dashboard. The orchestration was kept simple and does not show all the error paths and error handling paths.


Figure 6. The different orchestrated tasks to generate the offline predictions and dashboard.

Control-M Integration with Databricks

Before creating a job in Control-M that can execute the Databricks workflow, we will need to create a connection profile. A connection profile contains authorization credentials—such as the username, password, and other plug-in-specific parameters—and enables you to connect to the application server with only the connection profile name. Connection profiles can be created using the web interface and then retrieved in JSON format using Control-M's Automation API. Included below is a sample of the connection profile for Azure Databricks in JSON format. If you create the connection profile directly in JSON before running the job, it should be deployed using the Control-M Automation API CLI.

[Screenshot: sample Azure Databricks connection profile and its deployment with the Control-M Automation API CLI]

Creating a Databricks job

The job in Control-M that will execute the Databricks workflow is defined in JSON format as follows:

{
  "jog-databricks" : {
    "Type" : "Folder",
    "ControlmServer" : "smprod",
    "OrderMethod" : "Manual",
    "SiteStandard" : "jog",
    "SubApplication" : "Databricks",
    "CreatedBy" : "[email protected]",
    "Application" : "jog",
    "DaysKeepActiveIfNotOk" : "1",
    "When" : {
      "RuleBasedCalendars" : {
        "Included" : [ "EVERYDAY" ],
        "EVERYDAY" : {
          "Type" : "Calendar:RuleBased",
          "When" : {
            "DaysRelation" : "OR",
            "WeekDays" : [ "NONE" ],
            "MonthDays" : [ "ALL" ]
          }
        }
      }
    },
    "jog-azure-databricks-workflow" : {
      "Type" : "Job:Azure Databricks",
      "ConnectionProfile" : "JOG-AZ-DATABRICKS",
      "Databricks Job ID" : "674653552173981",
      "Parameters" : "\"params\" : {}",
      "SubApplication" : "Databricks",
      "Host" : "azureagents",
      "CreatedBy" : "[email protected]",
      "RunAs" : "JOG-AZ-DATABRICKS",
      "Application" : "jog",
      "When" : {
        "WeekDays" : [ "NONE" ],
        "MonthDays" : [ "ALL" ],
        "DaysRelation" : "OR"
      },
      "AzEastPublicIPs" : {
        "Type" : "Resource:Pool",
        "Quantity" : "8"
      }
    }
  }
}

Running the Databricks workflow

To run the job, we will use the run service within the Automation API.
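
As a rough, hedged illustration (not taken from the original post), the run service can also be invoked over REST from Python. The endpoint paths and the multipart field name (definitionsFile) follow the documented Automation API pattern but should be verified against your Automation API version; the host and credentials below are placeholders.

import requests

ENDPOINT = "https://controlm.example.com:8443/automation-api"  # placeholder endpoint

# Log in to obtain a session token (Helix Control-M uses an API key header instead).
login = requests.post(f"{ENDPOINT}/session/login",
                      json={"username": "automation", "password": "********"},
                      verify=False)  # self-signed certificates are common on on-prem endpoints
token = login.json()["token"]

# Submit the job definitions file to the run service; the response includes the run ID
# used to track execution status.
with open("jog-databricks.json", "rb") as f:
    run = requests.post(f"{ENDPOINT}/run",
                        headers={"Authorization": f"Bearer {token}"},
                        files={"definitionsFile": ("jog-databricks.json", f)},
                        verify=False)
print(run.json())

Equivalently, the Automation API CLI can submit the same definitions file with its run command.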


Visualizing the workflow


Figure 7. Databricks workflow executing in the Control-M monitoring domain.

Output of completed Databricks job in Control-M


Figure 8. Output of completed Databricks job in Control-M.

Task workflow in Azure Databricks


Figure 9. Task workflow in Azure Databricks.

Graph view of task workflow in Azure Databricks


Figure 10. Graph view of task workflow in Azure Databricks.

Outputs and visualization

Two forms of output were generated during the project.

  • Predicted results from the model for POS sales and inventory predictions needed based on demand—these were stored as Delta format files on the lakehouse for offline viewing and analysis.
  • Visualizations of the feature weights that contributed to the predictions for POS and inventory data for both random forest and linear regression algorithms.

The four figures below show the feature weights for each of the above algorithms across the different features for POS and inventory predictions from the feature engineered attributes.


Figure 11. Linear regression POS feature weight.


Figure 12. Linear regression inventory feature weight.


Figure 13. Random forest inventory top 20 features by importance.


Figure 14. Random forest POS feature top 20 by importance.

Conclusion

This blog demonstrates an ML use case for forecasting sales and inventory. The ML workflow is likely to be part of a larger orchestration workflow in Control-M, where it is interdependent on workflows running in POS and inventory management applications. However, in this blog, we have maintained the focus on the ML workflow in Databricks and its integration and execution through Control-M.

Why You Should Modernize Your Orchestration Platform With Your SAP S/4HANA® Migration
https://www.bmc.com/blogs/orchestration-s4hana-migration/ (Wed, 05 Jun 2024)

As SAP®’s 2027 deadline for discontinuing support for its legacy enterprise resource planning (ERP) and other modules approaches, the need to plan and enact the transition to SAP S/4HANA gets more urgent. Missing the transition deadline could have long-lasting ramifications such as additional maintenance costs and less inclusive approaches. By anticipating the chaos and consequences of this forced migration now, businesses will be better prepared to maintain business continuity and de-risk their transition to SAP S/4HANA and other cloud products.

But even those businesses that start the migration process as soon as possible will face significant challenges. In fact, the more well-established and entrenched in SAP ERP Central Component (ECC) the company is, the more challenging the transition becomes. There are many reasons this might be the case, including:

Integrations—Over the years, companies accumulate a growing number of integrations with their SAP systems. Ongoing management of all these integrations can be increasingly difficult and can detract from a company’s ability to properly focus on their migration project.

Complex landscape—Often, organizations have complex system landscapes that may include multiple ERPs, add-ons, non-SAP systems, and custom scripting. If any of the jobs and processes associated with these systems break during migration, business can grind to a halt, costing the company valuable time and money.

No plan for ongoing automation—As organizations mature, they tend to find what works and then rarely deviate from the status quo. This is especially true when it comes to automation in their SAP environments. Organizations update and plan their automation strategies with a focus on continuing to run their automation the way it has always been done. During migration, this lack of forward thinking can lead to a break in automation as the company moves to SAP S/4HANA.

Given the size of the lift required for migration to SAP S/4HANA, there is little time left for those companies that haven’t begun to define their application and workflow orchestration strategy for SAP S/4HANA if they want to avoid any disruption to their business. Migration will ultimately only be successful if workflows run reliably, improve day-to-day work for business users, and support the enterprise’s plans for innovation. Therefore, a modern workflow orchestration platform is key to unlocking the full potential of SAP S/4HANA.

Organizations can’t assume their current workflows will perform at par after the transition because the new environment will likely have new complexities (such as the need for multi-cloud orchestration) that impact workflow performance. Therefore, companies should modernize their orchestration platform concurrently with their SAP S/4HANA migration.

As an SAP-certified solution, Control-M creates and manages SAP ECC, SAP S/4HANA, SAP BW, and data archiving jobs, and supports any application in the SAP ecosystem, eliminating time, complexity, and specialized knowledge requirements. It provides out-of-the-box visibility to all workflows across SAP and non-SAP source systems and de-risks the transition to SAP S/4HANA. With Control-M, users can achieve full compliance, security, and governance across business processes, automation, and integration for key tasks during upgrades and migrations to SAP S/4HANA. Both the self-hosted and SaaS versions of Control-M are SAP Certified for Integrations with RISE with SAP S/4HANA Private Cloud.

Control-M provides customers with many valuable benefits, whether they're planning their migration path or are already in the process of migrating to SAP S/4HANA.

Reduced project time and upgrade costs

Control-M is a valuable ally in SAP S/4HANA migration projects, empowering organizations with streamlined workload management, enhanced visibility, automation capabilities, and flexibility to ensure successful and efficient migration journeys. All pre- and post-migration automation with Control-M helps the organization continue smooth business operations, easing the process overall. An SAP S/4HANA migration project is expensive, with costs incurred across the organization. Costs include planning time, resource allocation, licenses, strategy, and roadmap development and implementation. The business is the sponsor of these capital projects. If all scoping and planning project deliverables are defined and agreed upon with the steering committee, then complete modernization with Control-M can be done along with the SAP S/4HANA project. The result is a reduction in the number of subsequent upgrade projects, and the savings can be applied to other parts of the business.

Reduced integration complexity

Control-M is designed to seamlessly integrate with contemporary ERP systems like SAP S/4HANA. By modernizing your orchestration platform with Control-M, you can ensure smoother data flows and communication between various systems and applications in your enterprise ecosystem. With proper, accurate planning and defining of all project deliverables and their dependencies, Control-M provides a complete integration view along with the upgrade project.

Complete automation and increased operational efficiency

Control-M offers advanced automation capabilities such as workflow automation, event-driven architecture, and graphical scheduling and monitoring tools, along with analytics through Workflow Insights. By leveraging these automation features alongside your SAP S/4HANA migration, you can streamline business processes, reduce manual effort, and increase operational efficiency. Control-M provides real-time visibility into your enterprise operations, allowing you to monitor and manage processes, transactions, and data flows across your organization. By integrating Control-M with the SAP S/4HANA migration project, you can gain comprehensive insights into your ERP processes and transactions, enabling better decision-making and governance.

Technology stack alignment and scalability

With Control-M, you can align with the IT infrastructure roadmap and ensure that it supports your organization’s growth and expansion and can be easily scaled to leverage new functionalities introduced by SAP S/4HANA. Control-M provides scalability and flexibility options that can be utilized by the business, paving the road for a strong foundation for all business needs and technological advancements. Control-M sets a foundation to start other upgrades or innovation projects and provides a platform and structure where project managers already have that dependency cleared. That way, they can focus on other innovative projects. Control-M serves as a prerequisite for other dependent integration projects, saving the company time and money and expediting other projects that are in the pipeline.

Conclusion

By modernizing your orchestration platform with Control-M during the SAP S/4HANA migration project, you can accelerate the time to value of your digital transformation initiatives. Control-M typically offers pre-built job types, templates, and integration capabilities that facilitate rapid deployment and configuration, enabling you to achieve faster return on investment (ROI) and business outcomes. The self-hosted and SaaS editions of Control-M are modern workflow orchestration solutions because of their support for hybrid self-hosted and SaaS environments, their ability to orchestrate complex data pipelines and support shift-left user self-service for workflow management, and their DevOps-friendly Jobs-as-Code approach that makes it easy to build automation and increased reliability into workflows.

Control-M can serve as a comprehensive platform because of its many integrations. As noted, the solution supports all SAP versions and job types. It also supports many other enterprise workflows and has more than 100 native integrations to popular tools, including Amazon Web Services (AWS), Azure, and Google Cloud (and many of their components), Oracle, Informatica, SQL, Red Hat, Kubernetes, Apache Airflow, Hadoop, Spark, Databricks, UiPath, OpenText (Micro Focus), Alteryx, and many more.

A migration to SAP S/4HANA will ultimately only be successful if workflows run reliably, improve day-to-day work for business users, and support the enterprise’s plans for innovation. Therefore, a modern workflow orchestration platform is key to unlocking the full potential of SAP S/4HANA.

To learn more about how Control-M can improve your SAP S/4HANA migration and other SAP workflows, visit our website!

Unlock Your Data Initiatives with DataOps
https://www.bmc.com/blogs/unlock-data-initiatives-with-dataops/ (Mon, 20 May 2024)

Across every industry, companies continue to put increased focus on gathering data and finding innovative ways to garner actionable insights. Organizations are willing to invest significant time and money to make that happen.

According to IDC, the data and analytics software and cloud services market reached $90 billion in 2021 and is expected to more than double by 2026 as companies continue to invest in artificial intelligence and machine learning (AI/ML) and modern data initiatives. It is worth noting that a significant amount of data storage, processing, and insights is happening in the cloud, given the elastic compute and storage capabilities available.

However, despite high levels of investment, data projects can often yield lackluster results. A recent McKinsey survey of major advanced analytics programs found that companies spend 80 percent of their time doing repetitive tasks such as preparing data, where limited value-added work occurs. Additionally, it found that only 10 percent of companies feel they have this issue under control.

So why are data project failure rates so high despite increased investment and focus?

Many variables can impact project success. Often cited factors include project complexity and limited talent pools. Data scientists, cloud architects, and data engineers are in short supply globally. Companies are also recognizing that many of their data projects are failing because they struggle to operationalize the data initiatives at scale in production.

Unlocking data with DataOps

This has led to the emergence of DataOps as a new framework for overcoming common challenges. DataOps is the application of agile engineering and DevOps best practices to the field of data management to help organizations rapidly turn new insights into fully operationalized production deliverables that unlock business value from data.

The number of organizations adopting DataOps practices to help them unlock their data is increasing exponentially, so much so that analyst firms have started tracking DataOps tools as a market.

In 2022, industry analyst Gartner® published the Market Guide for DataOps Tools, in which it provided this market definition:

“DataOps tools provide greater automation and agility over the full life cycle management of data pipelines in order to streamline data operations. The core capabilities of a DataOps tool include:

  • Orchestration: Connectivity, workflow automation, lineage, scheduling, logging, troubleshooting, and alerting
  • Observability: Monitoring live/historic workflows, insights into workflow performance and cost metrics, impact analysis
  • Environment Management: Infrastructure as code, resource provisioning, environment repository templates, credentials management
  • Deployment Automation: Version control, release pipelines, approvals, rollback, and recovery
  • Test Automation: Business rules validation, test scripts management, test data management”

As the Gartner market definition indicates, orchestration of data pipelines is a key element of DataOps capabilities. However, data workflow orchestration comes with its own set of challenges.

Data orchestration challenges

Most data pipeline workflows are immensely complex and run across many disparate applications, data sources, and infrastructure technologies that need to work together. While the goal is to automate these processes in production, the reality is that without a powerful workflow orchestration platform, delivering these projects at enterprise scale can be expensive and often requires significant time spent doing manual work.

Data workflow orchestration projects have four key stages: ingestion, storage, processing, and delivering insights to make faster and smarter decisions.


Figure 1. Data projects have four stages with many moving parts across multiple technologies.

Ingestion involves collecting data from traditional sources like enterprise resource planning (ERP) and customer relationship management (CRM) solutions, financial systems, and many other systems of record in addition to data from modern sources like devices, Internet of Things (IoT) sensors, and social media.

Storage increases the complexity with numerous different tools and technologies that are part of the data pipeline. Where and how you store data depends a lot on persistence, the relative value of the data sets, the refresh rate of your analytics models, and the speed at which you can move the data to processing.

Processing has many of the same challenges. How much pure processing is needed? Is it constant or variable? Is it scheduled, event-driven, or ad hoc? How do you minimize costs? The list goes on and on.

Delivering insights requires moving the data output to analytics systems. This layer is also complex, with a growing number of tools representing the last mile in the data pipeline.

With new data and cloud technologies being frequently introduced, companies are constantly reevaluating their tech stacks. This evolving innovation creates pressure and churn that can be challenging because companies need to easily adopt new technologies and scale them in production. Ultimately, if a new data analytics service is not in production at scale, companies are not getting actionable insights or achieving value.

Achieving production at scale with the right platform

Successfully running business-critical workflows at scale in production doesn’t happen by accident. The right workflow orchestration platform can help you streamline your data pipelines and get the actionable insights you need. That makes finding the right workflow orchestration platform vital.

With that in mind, here are eight essential capabilities to look for in your workflow orchestration platform:

  1. Support heterogeneous workflows: Companies are rapidly moving to the cloud, and for the foreseeable future will have workflows across a highly complex mix of hybrid environments. For many, this will include supporting the mainframe and distributed systems across the data center and multiple private and/or public clouds. If your orchestration platform cannot handle the diversity of applications and underlying infrastructure, you will have a highly fragmented automation strategy with many silos of automation that require cumbersome custom integrations to handle cross-platform workflow dependencies.
  2. Service level agreement (SLA) management: Business workflows, ranging from ML models predicting risk to financial close and payment settlements, all have completion SLAs that are sometimes governed by guidelines set by regulatory agencies. Your orchestration platform must be able to understand and notify you of task failures and delays in complex workflows, and it needs to be able to map issues to broader business impacts.
  3. Error handling and notifications: When running in production, even the best-designed workflows will have failures and delays. It is vital that the right teams are notified so that lengthy war room discussions just to figure out who needs to work on a problem can be avoided. Your orchestration platform must automatically send notifications to the right teams at the right time.
  4. Self-healing and remediation: When teams respond to job failures within business workflows, they take corrective action, such as restarting a job, deleting a file, or flushing a cache or temp table. Your orchestration platform should enable automation engineers to configure such actions to happen automatically the next time the same problem occurs.
  5. End-to-end visibility: Workflows execute interconnected business processes across hybrid tech stacks. Your orchestration platform should be able to clearly show the lineage of your workflows. This is integral to helping you understand the relationships between applications and the business processes they support. This is also important for change management. When making changes, it is vital to see what happens upstream and downstream from a process.
  6. Self-service user experience (UX) for multiple personas: Workflow orchestration is a team sport with many stakeholders such as data teams, developers, operations, business process owners, and more. Each team has different use cases and preferences for how they want to interact with the orchestration tools. This means your orchestration platform must offer the right user interface (UI) and UX for each team so they can benefit from the technology.
  7. Production standards: Running workflows in production requires adherence to standards, which means using correct naming conventions, error-handling patterns, etc. Your orchestration platform should have a mechanism that provides a very simple way to define such standards and guide users to the appropriate standards when they are building workflows.
  8. Support DevOps practices: As companies adopt DevOps practices such as continuous integration and continuous deployment (CI/CD) pipelines for workflow development, modification, and even the infrastructure deployment of workflows, your orchestration platform should be able to fit into modern release practices.

Control-M and BMC Helix Control-M

DataOps tools and methodologies can help you make the best use of your data investment. But if you want to succeed in your DataOps journey, you must be able to operationalize the data. Control-M (self-hosted) and Helix Control-M (SaaS) provide a layer of abstraction to simplify the orchestration of complex data pipelines. These application and data workflow orchestration platforms enable end-to-end visibility and predictive SLAs across any data technology or infrastructure.


Figure 2. Control-M is a layer of abstraction to simplify complex data pipelines.

Control-M and Helix Control-M can help you orchestrate your data pipelines, put your data to effective use, and improve your data-driven business outcomes. Both platforms are used by thousands of companies globally and are proven to help companies run data pipeline workflows in production at scale.

Here are some examples of the robust capabilities Control-M and Helix Control-M have and how they can help you streamline your data pipeline workflow orchestration:

Robust integrations
The tools required to run a modern business vary widely. Often, each department utilizes its own technologies, requiring manual scripting to connect workflows across the business. Control-M and Helix Control-M feature a vast library of out-of-the-box integrations that allow businesses to orchestrate the latest technologies.

SLA management and impact analysis
With Control-M and Helix Control-M, you can track the status of business service levels along with corresponding workflows, so you know exactly how business services are performing at any given time. The two platforms can predict that a service will be late if a job is delayed or has failed upstream because they are using historical data to calculate how long a downstream job usually takes to run. Using this data, they can notify stakeholders not only that a particular job is late, but which business services are at risk of being delayed.

Python client
Many teams within an organization need to interact with your workflow orchestration platform for various reasons. Developers are a particularly important stakeholder in the orchestration process. They develop the applications that will run in production and be orchestrated by Control-M and Helix Control-M. The Python client allows developers to invoke Control-M and Helix Control-M functions natively from their Python code.
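
As a hedged illustration only, the sketch below uses the open-source ctm-python-client package to define and run a simple job as code. The class and method names follow that library's published examples but are assumptions here and should be verified against the installed version; the endpoint, API key, folder, and job names are placeholders.

# Illustrative sketch, assuming the open-source ctm-python-client package
# (pip install ctm-python-client); verify class names against your installed version.
from ctm_python_client.core.comm import Environment
from ctm_python_client.core.workflow import Workflow, WorkflowDefaults
from aapi import JobCommand

# Connect to a Helix Control-M tenant; the endpoint and API key are placeholders.
env = Environment.create_saas(endpoint='https://tenant.example.com/automation-api',
                              api_key='<api-key>')

workflow = Workflow(env, WorkflowDefaults(run_as='workflow_user'))

# Define a simple command job inside a folder, validate it, then submit it.
workflow.add(JobCommand('CheckDailyFeed', command='echo "feed ready"'),
             inpath='DataPipelineDemo')

if workflow.build().is_ok():
    workflow.run()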

Visibility for business users
Business users are an important stakeholder, as well. They are ultimately responsible for the timely delivery of the services they own. With the Control-M mobile app and web interface, they can track the status of their workflows anytime, from anywhere, without having to contact the application teams or operations for status updates.

The need for data is on the rise and shows no signs of abating, which means that having the ability to store, process, and operationalize that data will remain crucial to the success of any organization. DataOps practices backed by the powerful data orchestration capabilities of Control-M and Helix Control-M can help you orchestrate data pipelines, streamline the data delivery process, and improve business outcomes.

  1. Gartner, Market Guide for DataOps Tools; December 5, 2022; Robert Thanaraj, Sharat Menon, Ankush Jain

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

To learn more about how Control-M and Helix Control-M can help you deliver data-driven outcomes faster, visit our website (Control-M/Helix Control-M).

Lessons Learned and Shared for Enabling Enterprise Self-Service Workflow Orchestration
https://www.bmc.com/blogs/bmc-on-bmc-helixcontrolm-self-service/ (Fri, 05 Apr 2024)

Many of BMC’s day-to-day operations run on our own solutions, which keeps us operating efficiently and gives us essential insights into our customers’ challenges and how we can improve our offerings. We call this approach BMC on BMC, and BMC Helix Control-M is a big part of it. As we’ve highlighted in previous blogs, Control-M orchestrates the thousands of workflows that keep BMC running daily and has produced many benefits for various business units, including finance, the enterprise data warehouse team, customer support, sales operations, marketing and more. Some of their reported results include $27 million in recurring cost avoidance savings through application and data workflow orchestration; 40–50 days saved annually in sales operations; automated generation of key executive reports, which eliminated the need for one manager to do it weekly; a significantly streamlined quarterly close process; and more.

The automation and operational dashboards that Control-M provides have been great for our business. As word of these successes spread, more business users came forward with their ideas for new use cases. In addition to using Control-M on our own projects, another thing BMC had in common with many of our customers is that our workflow development was centralized and the Information Systems and Technology (IS&T) department was the gatekeeper. It became very challenging for our IS&T operations team to balance keeping our systems running and meeting the demand for new services.

This situation is common and is a leading driver for the citizen development movement, where companies are giving their non-IS&T business users the autonomy to create their own services. Flipping the development model from centralized to decentralized can break development logjams, but it carries risks. Workflows today are more complex and have more dependencies than ever, making security and governance harder to maintain. Citizen developers can't be expected to account for all the variables that could cause the new services they envision to crash other enterprise workflows or open other vulnerabilities.

We in IS&T operations understood these risks. We also understood that we needed to embrace citizen development to keep the company agile. That understanding formed the foundation of our automation-as-a-service (AaaS) program. With BMC Helix Control-M at the center, democratizing data and giving users the tools they needed to orchestrate workflows and business services became easy. It also helped us mitigate new risks and handle governance blind spots.

Before implementing decentralized workflow development and orchestration, our business users dedicated significant time submitting requests, while IS&T invested substantial effort in follow-up and development. However, with the introduction of AaaS, the landscape transformed. Business users are now efficiently deploying more services into production, automation processes have become streamlined, and IS&T resources have been liberated to concentrate on innovation. Let’s delve into a comparison of the processes before and after this transformation.

Before AaaS

It typically took one to three days for a business user to complete a request for a new service. Because requests involve using data from multiple systems, the requesting business user needed to find and contact numerous system administrators to request permission to access various applications and their data, which meant opening multiple tickets to support a single job request.

The IS&T operations team found themselves inundated with an escalating volume of automation requests from different departments across the business. Each request required careful evaluation, leading to a decision of approval or rejection, followed by the development of workflows for those that were accepted. Upon acceptance, our team undertook the entire development process, encompassing integration creation, extensive testing, conflict resolution with existing workflows, identification of security vulnerabilities, and the subsequent deployment of services into production. Development timelines fluctuated significantly depending on the complexity of each task. Leveraging Control-M proved instrumental, as it automated numerous development and execution tasks while offering a plethora of pre-built integrations.

Introducing AaaS via BMC Helix Control-M has revolutionized our workflow request and development procedures. By leveraging BMC Helix Control-M, we’ve significantly streamlined the formerly time-consuming, labor-intensive, and costly processes associated with requesting and developing workflows. We’ve automated the entire workflow lifecycle, from initial request submission (ticketing) through follow-ups for missing information to decision-making regarding approval or rejection of new business services. Moreover, we’ve optimized environment provisioning and provided users whose projects were approved with tailored training via learning paths. What previously took users several days to request services now takes hours. Furthermore, our learning paths actively encourage users to explore and utilize the modern features of BMC Helix Control-M, thus accelerating the automation process even further.

Providing AaaS with BMC Helix Control-M

We’re achieving even greater efficiencies in workflow development, primarily because users are taking the lead, requiring minimal intervention from IS&T for each workflow. Thanks to BMC Helix Control-M, users can access the tools and integrations to construct workflows seamlessly, facilitated by an intuitive interface. Within this framework, business users enjoy remarkable flexibility in designing automation and other workflows that streamline their tasks. BMC Helix Control-M automatically implements guardrails, preventing user-generated workflows from disrupting others. Our framework logically isolates jobs, mitigating interference between them, with much of this functionality operating seamlessly behind the scenes. The solution also ensures data security with the built-in data protection features that safeguard sensitive information.

Most business users have crafted their workflows using a graphical user interface (GUI) and the Jobs-as-Code methodology. They leverage BMC Helix Control-M's user-friendly interface and extensive library of pre-built integrations. Additionally, we offer users a token for accessing pre-approved code on GitHub. To further support users, we've developed a learning path that enables them to explore BMC Helix Control-M's capabilities.

With this solution, IS&T no longer creates automation and workflows; instead, the business users are responsible. Our task is to provide them with the necessary resources and ensure the smooth operation of the overall system, with BMC Helix Control-M handling most of the automation processes.


Figure 1. When it comes to automation, a lot of work goes on behind the scenes.

AaaS architecture:

The image above represents the architecture behind AaaS. There are many moving parts, but at a high level, the architecture represents the following process:

  • Citizen developers use the current request mechanism within BMC Helix Digital Workplace to initiate requests.
  • The automated workflows commence once approval is obtained, and the security team has integrated the required group into OKTA. We aim to automate the OKTA process soon as part of our ongoing enhancements.

Through its Automation API, we’ve seamlessly integrated BMC Helix Control-M into our DevOps toolchain, comprising Bitbucket and HashiCorp Vault. This integration facilitates the provisioning of a secure swim lane within BMC Helix Control-M for the citizen developer, aligning with our enterprise standards for building workflows.


Figure 2. BMC Helix Control-M’s Automation API allowed us to create a secure swim lane for citizen developers within our DevOps toolchain.

While self-service and citizen development aim to empower business users, basing the program on BMC Helix Control-M has greatly benefited our IS&T and operations teams. As we’ll elaborate in an upcoming blog post, our use of the application and data workflow orchestration platform has substantially reduced the time burden on the IS&T department by automating a considerable portion of the provisioning and securing environments, enabling business users to create their workflows more efficiently.

One of our notable successes, which we encourage other organizations to adopt as a best practice, involves the creation of a dashboard designed to monitor citizen development projects organization-wide. A recent snapshot from this dashboard is depicted in the image below.

The dashboard calculates the business value derived from developed workflows and automations, providing an ongoing tally. For instance, one business unit has identified $27 million in cost avoidance through its self-developed workflows, with the value increasing with each execution. Such metrics aid us in decision-making regarding request approvals and prioritizations. It is crucial for any initiative to track adoption and usage metrics; to address this need, we established a dashboard that aggregates data from BMC Helix Control-M and other systems for comprehensive tracking. The screenshot below offers a preview of this dashboard.


Figure 3. Company-wide citizen development metrics dashboard.

“BMC on BMC” isn’t a temporary pilot or project with a predetermined end date or a specific quota of workflows to be developed. It’s an ongoing endeavor that continues to expand its influence across our entire international organization. Over half of our employees have used self-service to access our enterprise data warehouse (EDW), and the impact of user-developed workflows touches every employee in some capacity.

From this experience, we’ve learned some crucial insights:

  1. Enterprises must decentralize workflow development to effectively manage service requests and drive innovation.
  2. Decentralization should not entail compromises in workflow security, reliability, or governance.
  3. Scaling decentralized development and its governance mandates automation as the essential pathway forward.

While BMC Helix Control-M played a crucial enabling role, the collective efforts of many individuals were instrumental in achieving these process improvements. We advise customers aspiring for citizen development to recognize the significance of integrating a robust change management component into their programs. Self-service, citizen development, and advanced automation signify novel approaches to work. Both business users and IS&T professionals must be prepared for these changes. Embracing structured change management is one of our most valuable lessons learned.

While we’ve made significant strides and achieved numerous milestones since our inception, for those deeply engaged, we know that we’ve merely begun to tap into the potential of automation and self-service at BMC. We view it as a continuous journey and aspire to democratize self-service workflow automation for all. In my upcoming blog posts, I’ll delve into the crucial architecture and process of our approach to achieving this vision.

To learn more about BMC Helix Control-M, click here.

]]>
Simplify CSP Data Initiatives with Control-M https://www.bmc.com/blogs/simplify-csp-data-ctm/ Fri, 02 Feb 2024 05:59:29 +0000 https://www.bmc.com/blogs/?p=53418 In today’s hyper-connected digital-first world, having reliable phone, internet, and television services is non-negotiable. That means communications service providers (CSPs) must remain on the cutting edge of technology and maintain stellar customer relationships to stay competitive. They do this by leveraging massive amounts of data generated from sources like subscriber information, call detail records, and […]]]>

In today’s hyper-connected digital-first world, having reliable phone, internet, and television services is non-negotiable. That means communications service providers (CSPs) must remain on the cutting edge of technology and maintain stellar customer relationships to stay competitive. They do this by leveraging massive amounts of data generated from sources like subscriber information, call detail records, and sales.

The need to operationalize this data puts CSP data and analytics teams on a critical mission: To find ways to use insight-based analytics to support business transformation and create competitive advantages. The executive pressure behind it is strong. CSP data architects and their teams often struggle with deciding which data is needed and how it can be acquired, ingested, aggregated, processed, and analyzed so they can deliver the insights the business demands. Data isn’t a project—it’s a journey, and one that often comes without a roadmap.

Delivering data and analytics capabilities with the scope and scale CSPs need requires the flexibility to accommodate disparate data sources and technologies across varying infrastructure, both on-premises and in the cloud. To meet the demands of executives and business conditions, companies need a robust application and data workflow orchestration platform and strategy. This helps CSP organizations orchestrate essential tasks across the complete data lifecycle, so they can coordinate, accelerate, and operationalize their business modernization initiatives.

One of the biggest challenges on the data journey is not letting all the details and decisions about architecture, tools, processes, and integration distract from discovering how to deliver valuable insights and services across the organization.

All too commonly, organizations get bogged down by foundational data questions like:

  • Do we have the right framework to manage data pipelines?
  • What are the best options for feeding new data streams into our systems?
  • How can we integrate disparate technologies?
  • How can we leverage our existing systems of record?
  • Where should our data systems run?

The list goes on and on. As they try to find answers, companies can lose sight of the overall goal of creating systems that will provide better insight and improve decision-making. The details are essential, but so is staying focused on the big picture. The less time planners need to spend on the details of how data will be managed, the more they can focus on finding value and insight in their data.

To deal with the complexity, CSPs need industrial-strength application and data workflow orchestration capabilities. Many tools can orchestrate data workflows. Some of them—such as Apache Airflow—are open source. However, most of those tools are platform-specific, targeting specific personas to perform specific tasks. So, multiple tools must be cobbled together to orchestrate complex workflows across multi-cloud and hybrid environments.

End-to-end orchestration is essential for running data pipelines in production and an organization’s chosen platform must be able to support disparate applications and data on diverse infrastructures. Control-M (self-hosted and SaaS) does that by providing flexible application and data workflow orchestration for every stage of the data and analytics journey, operationalizing the business modernization initiatives every organization is striving to achieve. It offers interfaces tailored to the many personas involved in facilitating complex workflows, including IT operations (ITOps), developers, cloud teams, data engineers, and business users. Having everyone collaborating on a single platform, operating freely within the boundaries implemented by ITOps, speeds innovation and reduces time to value.

Control-M expedites the implementation of data pipelines by replacing manual processes with application and data integration, automation, and orchestration. This gives every project speed, scalability, reliability, and repeatability. Control-M provides visibility into workflows and service level agreements (SLAs) with an end-to-end picture of data pipelines at every stage, enabling quick resolution of potential issues through notification and troubleshooting before deadlines are missed. Control-M can also detect potential SLA breaches through forecasting and predictive analytics that prompt focused human intervention on specific remedial actions to prevent SLA violations from occurring.
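As a rough illustration of how this SLA visibility can be wired into a pipeline, the hedged sketch below appends an SLA job to a simple two-step workflow. The "Job:SLAManagement" type and its attributes follow the Automation API format as we understand it; job names, hosts, and deadlines are illustrative and should be validated against the product documentation.

```python
import json

# Hedged sketch: a pipeline folder that ends with an SLA job so Control-M can
# track the service deadline. "Job:SLAManagement", "ServiceName", and
# "CompleteBy" follow the Automation API documentation as we understand it;
# all names, hosts, and times here are illustrative only.
pipeline = {
    "CSP_CHURN_PIPELINE": {
        "Type": "Folder",
        "IngestCallRecords": {
            "Type": "Job:Command",
            "Command": "python ingest_cdrs.py",
            "Host": "data-agent-01",
            "RunAs": "dataops",
        },
        "ScoreChurnModel": {
            "Type": "Job:Command",
            "Command": "python score_churn.py",
            "Host": "data-agent-01",
            "RunAs": "dataops",
        },
        "ChurnServiceSLA": {
            "Type": "Job:SLAManagement",
            "ServiceName": "Churn-Scoring",
            "ServicePriority": "1",
            "CompleteBy": {"Time": "0700", "Days": "0"},
        },
        "flow": {
            "Type": "Flow",
            "Sequence": ["IngestCallRecords", "ScoreChurnModel", "ChurnServiceSLA"],
        },
    }
}

with open("churn_pipeline.json", "w") as f:
    json.dump(pipeline, f, indent=2)
```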

Data pipeline orchestration offers CSPs unique opportunities to improve their business by operationalizing data. For example, CSPs can reduce customer churn by leveraging data to identify signals and patterns that indicate potential issues. With that analysis, they can proactively target at-risk customers with retention campaigns and personalized offers. Additionally, CSPs can utilize customer data to optimize pricing, provide targeted promotions to customers, and deliver excellent customer experiences.

Case study

A major European CSP and media conglomerate utilizes Control-M throughout its business to harness the power of data. With more than 12 million customers, the company collects a staggering six petabytes of customer data per night, including viewing habits from television cable boxes, mobile network usage information, and website traffic. Using this information, it creates a 360-degree view of each customer. That means its customer information is never more than 15 minutes out of date, allowing it to provide the best customer service possible. In addition, this information is used to deliver targeted advertising so that each customer sees what is most relevant to their interests.

Control-M manages and orchestrates the entire data science modeling workflow end to end, both on-premises and in the cloud, through technologies including Google Cloud Platform (GCP), BigQuery, DataBricks, and many more. With Control-M, the CSP can use this massive amount of data to understand its customers, provide an optimized customer experience, slash cancellations, and help create new revenue streams.

Conclusion

Turning data and analytics into insights and actions can feel impossible—especially with the massive amount of data generated by a CSP. Control-M orchestrates and automates data pipelines to deliver the insights your organization needs.

Control-M helps CSPs orchestrate every step of a data and analytics project, including ingesting data into your systems, processing it, and delivering insights to business users and other teams that need to better utilize the refined data. It also brings needed consistency and integration between modern and legacy environments. The benefit of this integration and automation is that you can operationalize data to modernize your business, innovate faster, and deliver data initiatives successfully.

To learn more about how Control-M can help you improve your business outcomes, visit our website.

]]>
Simplifying Complex Mainframe Migration Projects with Micro Focus and Control‑M https://www.bmc.com/blogs/mainframe-migration-micro-focus-controlm/ Mon, 11 Dec 2023 18:23:37 +0000 https://www.bmc.com/blogs/?p=53333 In today’s complex and rapidly evolving digital landscape, organizations recognize business modernization as a key priority to drive growth and innovation. They are embracing a culture of agility and responsiveness that leverages both emerging technologies and tools and agile application and data pipeline development methods to foster customer-centric solutions. One area of enterprise digital modernization […]]]>

In today’s complex and rapidly evolving digital landscape, organizations recognize business modernization as a key priority to drive growth and innovation. They are embracing a culture of agility and responsiveness that leverages both emerging technologies and tools and agile application and data pipeline development methods to foster customer-centric solutions.

One area of enterprise digital modernization currently getting a lot of attention is the mainframe. As part of their modernization efforts, mainframe organizations are evaluating migrating applications to newer platforms like cloud and containers.

However, while the lure of more agility and flexibility for mainframe applications is a strong incentive, companies face significant risks because these changes can potentially impact critical business services. They’re also seeking to capitalize on these new technology investments while still preserving the knowledge and expertise embedded within existing applications, processes, and workflows that deliver these business-critical outcomes.

Consequently, most organizations with mainframes pursue a strategic migration approach that balances the pace of transition, retaining the existing components that continue to deliver value while selectively migrating others that are better served by modern environments.

To manage this balancing act, mitigate operational risks, and execute a smooth journey to the desired state, companies rely on two core technology platforms—Control-M, BMC’s application and data workflow orchestration platform, and Micro Focus Enterprise Server, Open Text’s solution for mainframe application replatforming.

Migrating application and data workflows

While mainframe systems are well-known for powering real-time transactions, the majority of their workloads actually run as batch. These batch jobs are integrated into workflows to deliver essential business outcomes such as supply chain execution, customer billing, payments, and end-of-period closing. Over time, the workflows have evolved, often becoming hybrid and complex in nature as they incorporate modern infrastructure and data technologies. While evolving, they have also been adapted to company processes and standards, accumulating best practices, insights, and institutional knowledge.

Migrating these critical workflows of tightly interconnected applications and data sources, and maintaining the associated institutional knowledge, is a key challenge in almost every mainframe application migration. Control-M is an ideal platform in this context. It ensures the integration of mainframe and migrated applications alongside other technologies on distributed systems, cloud, and container platforms. It also preserves built-in knowledge, processes, and standards, and enables a smooth, no-risk migration at the speed the organization desires.

In addition, Micro Focus Enterprise Server is a high-performance, scalable deployment environment that allows applications traditionally run on IBM® mainframes to be moved to other platforms (replatformed), including distributed systems, cloud, and containers, with only minor adjustments.

Managed transformation

Control-M has recently delivered a Micro Focus Enterprise Server integration that enables the centralized orchestration of Micro Focus jobs alongside other application and data workflows. This integration supports managing Micro Focus jobs through Control-M’s interfaces, leveraging the same advanced orchestration capabilities used across mainframe jobs, file transfers, enterprise resource planning (ERP) solutions, data sources, and cloud and container services.
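As a purely illustrative sketch of what that centralized orchestration might look like once a batch step has been replatformed, the example below places a replatformed job next to a downstream distributed job in a single folder. The "Job:Micro Focus" type string, connection profile, and attributes are placeholders rather than the integration's actual schema; consult the Control-M integration documentation for the real job type name and fields.

```python
import json

# Rough, hypothetical illustration only: a replatformed batch step orchestrated
# alongside an existing distributed job. "Job:Micro Focus" is a placeholder for
# the actual job type delivered by the Control-M integration, and the connection
# profile and hosts are invented for this sketch.
folder = {
    "BILLING_CLOSE": {
        "Type": "Folder",
        "ReplatformedBillingBatch": {
            "Type": "Job:Micro Focus",             # placeholder type name
            "ConnectionProfile": "MF_ENT_SERVER",  # placeholder profile
            "Host": "mf-agent-01",
        },
        "PublishBillingReport": {
            "Type": "Job:Command",
            "Command": "python publish_report.py",
            "Host": "reporting-agent",
            "RunAs": "billing",
        },
        "flow": {
            "Type": "Flow",
            "Sequence": ["ReplatformedBillingBatch", "PublishBillingReport"],
        },
    }
}

print(json.dumps(folder, indent=2))
```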

Control-M’s integration with Micro Focus Enterprise Server, coupled with BMC’s migration tools and support team expertise, positions the combined solution as the go-to migration resource and capability for mainframe modernization projects.

Control-M can easily replace mainframe applications with replatformed Micro Focus jobs, maintaining dependencies, workflow structure, built-in knowledge, and adherence to processes and standards. The change is operationally transparent, as Control-M continues to provide holistic visibility and standardized management of application and data workflows across source and destination environments, delivering consistent business outcomes and value as applications and the landscape evolve.

For existing Control-M customers, replatforming from mainframe to distributed or on-premises servers and/or cloud environments is guided, simple, and secure. The BMC Services organization, including its global partner network, is available to assist customers in migrating mainframe workflows through complete or selective replacement with Micro Focus jobs, providing continuously updated migration tools and sharing their expertise in converting workflows between platforms.

The BMC Services team follows a proven methodology with four key phases:

  • Planning: Includes creating a roadmap by assessing applications, dependencies, and environment constraints.
  • Development: Migrates applications using migration tools to minimize errors.
  • Verification: Compares original and migrated workflow outputs.
  • Execution: Deploys workloads once validation is complete.

Proven success

One BMC customer achieving success with such a migration is AG Insurance. To maintain its market leadership, the company is focused on customer and competitive differentiation, adding products and experiences to make it the best choice for customers, distributors, and brokers. The organization has embarked on an ambitious and complex replatforming modernization project to migrate from its mainframe to Windows servers.

Control-M has been integral to this transformation. As part of the replatforming project, AG Insurance migrated more than 80 million lines of code through Micro Focus application modernization solutions. To minimize risk and facilitate planning and implementation, the migration was accomplished through several sequential iterations, each with its own testing and validation cycles. During the iterative application migration process, Control-M was essential to the testing of parallel workflows, including migrated and non-migrated applications across the mainframe and the new distributed platform, and verifying that the business results they produced were identical.

Control-M continues to be the strategic orchestration framework driving all of AG Insurance’s applications and data workflows and enabling new possibilities.

Contemplating mainframe migration?

If you are one of the many mainframe-driven enterprises contemplating or actively pursuing a modernization program, Control‑M and Micro Focus Enterprise Server offer a compelling, integrated capability to manage your journey. Together, the solutions can help you adapt workflows and services with operational transparency so your organization can achieve a seamless transition from the mainframe while preserving your vital institutional knowledge and experience.

The migration can be approached through a managed methodology that leverages both Control-M’s expertise in migrating mainframe application workflows and Micro Focus’ expertise in migrating mainframe applications to mitigate risks and allow customers to migrate at their own pace.

For more information on how Control-M and Micro Focus Enterprise Server can advance your mainframe modernization initiative, download this whitepaper.

]]>
Streamlining Machine Learning Workflows with Control-M and Amazon SageMaker https://www.bmc.com/blogs/ml-workflows-controlm-sagemaker/ Fri, 10 Nov 2023 07:41:26 +0000 https://www.bmc.com/blogs/?p=53284 In today’s fast-paced digital landscape, the ability to harness the power of artificial intelligence (AI) and machine learning (ML) is crucial for businesses aiming to gain a competitive edge. Amazon SageMaker is a game-changing ML platform that empowers businesses and data scientists to seamlessly navigate the development of complex AI models. One of its standout […]]]>

In today’s fast-paced digital landscape, the ability to harness the power of artificial intelligence (AI) and machine learning (ML) is crucial for businesses aiming to gain a competitive edge. Amazon SageMaker is a game-changing ML platform that empowers businesses and data scientists to seamlessly navigate the development of complex AI models. One of its standout features is its end-to-end ML pipeline, which streamlines the entire process from data preparation to model deployment. Amazon SageMaker’s integrated Jupyter Notebook platform enables collaborative and interactive model development, while its data labeling service simplifies the often-labor-intensive task of data annotation.

It also boasts an extensive library of pre-built algorithms and deep learning frameworks, making it accessible to both newcomers and experienced ML practitioners. Amazon SageMaker’s managed training and inference capabilities provide the scalability and elasticity needed for real-world AI deployments. Moreover, its automatic model tuning and robust monitoring tools enhance the efficiency and reliability of AI models, ensuring they remain accurate and up-to-date over time. Overall, Amazon SageMaker offers a comprehensive, scalable, and user-friendly ML environment, making it a top choice for organizations looking to leverage the potential of AI.

Bringing Amazon SageMaker and Control-M together

Amazon SageMaker simplifies the entire ML workflow, making it accessible to a broader range of users, including data scientists and developers. It provides a unified platform for building, training, and deploying ML models. However, to truly harness the power of Amazon SageMaker, businesses often require the ability to orchestrate and automate ML workflows and integrate them seamlessly with other business processes. This is where Control-M from BMC comes into play.

Control-M is a versatile application and data workflow orchestration platform that allows organizations to automate, monitor, and manage their data and AI-related processes efficiently. It can seamlessly integrate with SageMaker to create a bridge between AI modeling and deployment and business operations.

In this blog, we’ll explore the seamless integration between Amazon SageMaker and Control-M and the transformative impact it can have on businesses.

Amazon SageMaker empowers data scientists and developers to create, train, and deploy ML models across various environments—on-premises, in the cloud, or on edge devices. An end-to-end data pipeline includes more than just Amazon SageMaker’s AI and ML functionality: data is ingested from multiple sources, transformed, and aggregated before a model is trained and AI/ML pipelines are executed with Amazon SageMaker. Control-M is often used for automating and orchestrating end-to-end data pipelines. A good example of end-to-end orchestration is covered in the blog, “Orchestrating a Predictive Maintenance Data Pipeline,” co-authored by Amazon Web Services (AWS) and BMC.

Here, we will specifically focus on integrating Amazon SageMaker with Control-M. When you have Amazon SageMaker jobs embedded in a data pipeline or complex workflow orchestrated by Control-M, you can harness the capabilities of Control-M for Amazon SageMaker to efficiently execute an end-to-end data pipeline that also includes Amazon SageMaker pipelines.

Key capabilities

Control-M for Amazon SageMaker provides:

  • Secure connectivity: Connect to any Amazon SageMaker endpoint securely, eliminating the need to provide authentication details explicitly
  • Unified scheduling: Integrate Amazon SageMaker jobs seamlessly with other Control-M jobs within a single scheduling environment, streamlining your workflow management
  • Pipeline execution: Execute Amazon SageMaker pipelines effortlessly, ensuring that your ML workflows run smoothly
  • Monitoring and SLA management: Keep a close eye on the status, results, and output of Amazon SageMaker jobs within the Control-M Monitoring domain and attach service level agreement (SLA) jobs to your Amazon SageMaker jobs for precise control
  • Advanced capabilities: Leverage all Control-M capabilities, including advanced scheduling criteria, complex dependencies, resource pools, lock resources, and variables to orchestrate your ML workflows effectively
  • Parallel execution: Run up to 50 Amazon SageMaker jobs simultaneously per agent, allowing for efficient job execution at scale

Control-M for Amazon SageMaker compatibility

Before diving into how to set up Control-M for Amazon SageMaker, it’s essential to ensure that your environment meets the compatibility requirements:

  • Control-M/EM: version 9.0.20.200 or higher
  • Control-M/Agent: version 9.0.20.200 or higher
  • Control-M Application Integrator: version 9.0.20.200 or higher
  • Control-M Web: version 9.0.20.200 or higher
  • Control-M Automation API: version 9.0.20.250 or higher

Please ensure you have the required installation files for each prerequisite available.

A real-world example:

The Abalone Dataset, sourced from the UCI Machine Learning Repository, has been frequently used in ML examples and tutorials to predict the age of abalones based on various attributes such as size, weight, and gender. The age of abalones is usually determined through a physical examination of their shells, which can be both tedious and intrusive. However, with ML, we can predict the age with considerable accuracy without resorting to physical examinations.

For this exercise, we used the Abalone tutorial provided by AWS. This tutorial efficiently walks users through the stages of data preprocessing, training, and model evaluation using Amazon SageMaker.

After understanding the tutorial’s nuances, we trained the Amazon SageMaker model with the Abalone Dataset, achieving satisfactory accuracy. Further, we created a comprehensive continuous integration and continuous delivery (CI/CD) pipeline that automates model retraining and endpoint updates. This not only streamlined the model deployment process but also ensured that the Amazon SageMaker endpoint for inference was always up-to-date with the latest trained model.

Setting up Control-M for Amazon SageMaker

Now, let’s walk through how to set up Control-M for Amazon SageMaker, which has three main steps:

  1. Creating a connection profile that Control-M will use to connect to the Amazon SageMaker environment
  2. Defining an Amazon SageMaker job in Control-M that will define what we want to run and monitor within Amazon SageMaker
  3. Executing an Amazon SageMaker pipeline with Control-M

Step 1: Create a connection profile

To begin, you need to define a connection profile for Amazon SageMaker, which contains the necessary parameters for authentication and communication with SageMaker. Two authentication methods are commonly used, depending on your setup.

Example 1: Authentication with AWS access key and secret

Figure 1. Authentication with AWS access key and secret.

Example 2: Authentication with AWS IAM role from EC2 instance

Figure 2. Authentication with AWS IAM role.

Choose the authentication method that aligns with your environment. It is important to specify the Amazon SageMaker job type exactly as shown in the examples above. Please note that Amazon SageMaker is case-sensitive, so make sure to use the correct capitalization.
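To give a sense of what the two profile styles shown in Figures 1 and 2 look like in Automation API JSON, here is a hedged sketch. The "ConnectionProfile:AWS SageMaker" type string and the attribute names are approximations of the plug-in's schema; verify them against the BMC documentation for Control-M for Amazon SageMaker before use, and keep real credentials in a secrets manager rather than in files.

```python
import json

# Hedged sketch of the two connection-profile styles from Figures 1 and 2.
# Type and attribute names are approximations of the plug-in schema and must be
# checked against the official documentation; all values are placeholders.
profiles = {
    "SAGEMAKER_KEYS": {                         # Example 1: access key and secret
        "Type": "ConnectionProfile:AWS SageMaker",
        "AWS Region": "us-east-1",
        "AWS Access Key": "<access-key-id>",    # placeholder
        "AWS Secret": "<secret-access-key>",    # placeholder; store in a vault
    },
    "SAGEMAKER_IAM": {                          # Example 2: IAM role on the EC2 agent host
        "Type": "ConnectionProfile:AWS SageMaker",
        "AWS Region": "us-east-1",
        "IAM Role": "control-m-sagemaker-role", # placeholder role name
    },
}

with open("sagemaker_profiles.json", "w") as f:
    json.dump(profiles, f, indent=2)
# Deployed with the Automation API CLI, e.g.: ctm deploy sagemaker_profiles.json
```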

Step 2: Define an Amazon SageMaker job

Once you’ve set up the connection profile, you can define an Amazon SageMaker job within Control-M. This job type enables you to execute Amazon SageMaker pipelines effectively.

Figure 3. Example AWS SageMaker job definition.

In this example, we’ve defined an Amazon SageMaker job, specifying the connection profile to be used (“AWS-SAGEMAKER”). You can configure additional parameters such as the pipeline name, idempotency token, parameters to pass to the job, retry settings, and more. For a detailed understanding and code snippets, please refer to the BMC official documentation for Amazon SageMaker.
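For readers who prefer to see the definition as text rather than the screenshot in Figure 3, here is a hedged approximation. Attribute names such as "Pipeline Name" and "Idempotency Token" mirror the parameters described above, but the exact schema should be confirmed in the BMC documentation for Control-M for Amazon SageMaker.

```python
import json

# Hedged approximation of the job shown in Figure 3. The "Job:AWS SageMaker"
# type string and attribute names are our reading of the plug-in schema and
# must be verified against the official documentation; values are illustrative.
sagemaker_job = {
    "ABALONE_ML": {
        "Type": "Folder",
        "RunAbalonePipeline": {
            "Type": "Job:AWS SageMaker",             # plug-in job type (verify exact name)
            "ConnectionProfile": "AWS-SAGEMAKER",    # profile created in Step 1
            "Pipeline Name": "AbalonePipeline",
            "Idempotency Token": "abalone-%%ORDERID",  # illustrative token value
            "Parameters": "{}",                      # optional pipeline parameters
            "Host": "sagemaker-agent",
        },
    }
}

with open("sagemakerjob.json", "w") as f:
    json.dump(sagemaker_job, f, indent=2)
# Validated and executed with the commands described in Step 3:
#   ctm build sagemakerjob.json
#   ctm run sagemakerjob.json
```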

Step 3: Executing the Amazon SageMaker pipeline with Control-M

It’s essential to note that the pipeline name and endpoint are mandatory JSON objects within the pipeline configuration. Executing the “ctm run” command on the pipeline.json file activates the pipeline’s execution within AWS.

First, we run “ctm build sagemakerjob.json” to validate our JSON configuration and then the “ctm run sagemakerjob.json” command to execute the pipeline.

Figure 4. Launching Amazon SageMaker job.

As seen in the screenshot above, the “ctm run” command has launched the Amazon SageMaker job. The next screenshot shows the pipeline running from the Amazon SageMaker console.

Figure 5. View of data pipeline running in Amazon SageMaker console.

In the Control-M monitoring domain, users have the ability to view job outputs. This allows for easy tracking of pipeline statuses and provides insights for troubleshooting any job failures.

Figure 6. View of Amazon SageMaker job output from Control-M Monitoring domain.

Summary

In this blog, we demonstrated how to integrate Control-M with Amazon SageMaker to unlock the full potential of AWS ML services, orchestrating them effortlessly into your existing application and data workflows. This fusion not only eases the management of ML jobs but also optimizes your overall automation processes.

Stay tuned for more blogs on Control-M and BMC Helix Control-M integrations! To learn more about Control-M integrations, visit our website.

]]>
Unlock the power of SAP® Financial Close with Control-M https://www.bmc.com/blogs/unlock-sap-financial-close-controlm/ Thu, 19 Oct 2023 05:06:44 +0000 https://www.bmc.com/blogs/?p=53243 Executive summary SAP® is a complex system with many integrations and modules for thousands of time-sensitive financial closing activities that must sync with each other so the final general ledger can be balanced. All modules and sub-modules in finance need to interact with each other in a time-dependent fashion to successfully close any outstanding and […]]]>

Executive summary

SAP® is a complex system with many integrations and modules for thousands of time-sensitive financial closing activities that must sync with each other so the final general ledger can be balanced. All modules and sub-modules in finance need to interact with each other in a time-dependent fashion to successfully close any outstanding and open items. Collecting closing documents from various stakeholders across the organization can create major challenges for the business to successfully close its financial books and enter a new fiscal month/year. As a result, accounting is often behind in closing previous months, and the books are rarely up to date and balanced. Both issues create financial uncertainty for the organization.

Organizations need a successful month-end, quarter-end, and year-end close in which all carry forwards are moved into the next fiscal year, the general ledger (GL) and sub-ledgers can be closed, and the trial balance is balanced. This allows companies to maintain strong cash flow and liquidity and reduce total cost of ownership (TCO).

Financial closing is a jumble of task types, closing types (monthly, yearly, quarterly), cost centers/profit centers, time-dependent variables, custom factory calendars, and custom programs and transactions. Efficient financial closing processes are crucial for decision-making, financial transparency, and maintaining the trust of stakeholders, including investors, regulators, and the public.

Common challenges with the financial close process

1. Lack of enterprise visibility

If all tasks are not completed on time by the end of a given period, or some GL postings remain open, it is very likely that the GL will not be balanced on time. Bad data passed to another group leads to worse data. Access to real-time insights and visibility is critical, but not always common.

2. Data accuracy and reconciliation

Reconciling accounts, validating transactions, and resolving discrepancies can be time-consuming and complex, particularly in large organizations with numerous transactions and accounts. As business units balance their sub-ledgers while waiting on dependencies within their own group, other teams may be waiting on them. Just a single error can lead to inaccurate data and an unbalanced GL. The manual effort to resolve this can be overwhelming.

3. Time sensitivity

Financial closes often have strict deadlines, especially for quarterly and annual reporting. Meeting these deadlines can be challenging, especially if there are delays in data gathering, reconciliation, or approval processes.

Missing a financial close, especially a critical one like a quarterly or annual close, can have significant consequences for enterprise organizations, such as being out of compliance with international accounting standards, tax laws, and industry-specific standards, or risking a possible audit. The most immediate consequence is a delay in financial reporting. This can erode trust among stakeholders, including investors, creditors, and regulators, who rely on timely and accurate financial statements for decision-making. Many organizations are legally obligated to file financial reports within specific deadlines. Failure to meet these deadlines can lead to fines, penalties, or legal actions by regulatory authorities. Another consequence is the potential negative impact to the stock price if investors become concerned that the company is in trouble. The list goes on and on.

Benefits of integrating BMC Helix Control-M and Control-M into your SAP finance system

Control-M for SAP® creates and manages SAP ECC, SAP S/4HANA®, SAP Business Warehouse (BW), and data archiving jobs, and supports any applications in the SAP ecosystem, eliminating time, complexity, and any specialized knowledge requirements, while also securely managing the dependencies and silos between SAP and non-SAP systems.

Control-M can speed up even the most complex closing cycles while meeting regulatory requirements and financial reporting standards, allowing you to track closing processes at every stage, including manual steps, transactions, programs, jobs, workflows, and remote tasks.

The step-by-step activities of a financial closing

Figure 1. The step-by-step activities of a financial closing.

Plan all your tasks with Control-M job planning and scheduling for better visibility

The jobs and tasks that affect all SAP modules relevant to a financial close can be grouped together in the planning feature of Control-M. This provides full enterprise visibility into the jobs and tasks that will be executed and helps reduce silos.

Control-M provides answers to your most common questions, including:

  • Where is my job running?
  • In which system?
  • In which cost center?

Further, Control-M can also provide:

  • Intelligent, predictive service level agreement (SLA) management for all business processes and jobs.
  • Resolution and better visibility for cross-application and cross-platform workload challenges.

Pre-carry forward and post-carry forward activities with the Control-M job dependency feature

Using Control-M, all pre- and post-carry forward activities can be put in their respective buckets and all time dependencies can be defined. Once all pre-carry forward tasks and the jobs within them have completed successfully, you can move on to the next step. Conversely, if jobs have failed, the alert and notification feature of Control-M can notify the job owners. When designing jobs, you can define a workflow for what to do if jobs fail, which determines whether subsequent jobs should continue to run or the process should be stopped. A minimal sketch of this pattern appears below.
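The sketch below is a minimal, hypothetical rendering of that pre-/post-carry forward pattern in Automation API JSON: the post job waits on the pre job, and a failure triggers a mail alert. Generic "Job:Command" steps stand in for the actual Control-M for SAP job types, and the "If"/"Mail" action attributes follow the Automation API format as we understand it; verify them against the documentation.

```python
import json

# Minimal, hypothetical sketch of the pre-/post-carry forward pattern: the post
# job depends on the pre job, and a failed pre job sends a mail notification.
# Generic Job:Command steps are placeholders for Control-M for SAP job types;
# the If/Mail attributes are our reading of the Automation API schema.
close_folder = {
    "FINANCE_CLOSE": {
        "Type": "Folder",
        "PreCarryForward": {
            "Type": "Job:Command",
            "Command": "run_pre_carry_forward.sh",   # placeholder for an SAP closing job
            "Host": "sap-agent-01",
            "RunAs": "sapops",
            "IfFails": {
                "Type": "If",
                "CompletionStatus": "NOTOK",
                "NotifyOwners": {
                    "Type": "Mail",
                    "To": "gl-close-team@example.com",
                    "Subject": "Pre-carry forward step failed",
                    "Message": "Hold post-carry forward jobs until resolved.",
                },
            },
        },
        "PostCarryForward": {
            "Type": "Job:Command",
            "Command": "run_post_carry_forward.sh",  # placeholder
            "Host": "sap-agent-01",
            "RunAs": "sapops",
        },
        "flow": {"Type": "Flow", "Sequence": ["PreCarryForward", "PostCarryForward"]},
    }
}

print(json.dumps(close_folder, indent=2))
```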

Effective controls and measures to put a temporary hold on certain financial posting processes

There are many scenarios during financial closing where manual adjustments are required. If the need arises to stop a certain financial closing job or put a temporary hold on a job for a manual adjustment, Control-M offers dynamic workload management to stop or start a process or job, pause subsequent jobs, and flexibly restart from the point of failure to prevent incorrect month-end postings.

Non-SAP postings with Control-M Managed File Transfer

Control-M can orchestrate all SAP jobs, as well as fully automate, schedule, and monitor all jobs coming from non-SAP systems. For example, there is a lot of data coming from investments, bank reconciliation files, and other open balances from sources that are not in SAP. All financial postings coming from non-SAP systems need to be consolidated within SAP. Control-M Managed File Transfer can be utilized to bring all manual postings from non-SAP systems into the SAP enterprise resource planning (ERP) system of record for all final closings and postings via file transfer protocol.

The solution also helps you reduce risk and deliver business services faster by automating internal and external file transfers in a single view with related application workflows in hybrid environments. With Control-M Managed File Transfer, you can schedule and manage your file transfers securely and efficiently with Federal Information Processing Standards (FIPS) compliance and policy-driven processing rules. Additionally, Control-M reduces file transfer point product risks and provides a 360-degree view, customizable dashboards, and advanced analytics.
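To illustrate how such a transfer might be defined, here is a hedged sketch of a Managed File Transfer job that pulls non-SAP postings, such as bank reconciliation files, into an SAP inbound directory. "Job:FileTransfer" and its attributes follow the Automation API format as we understand it; connection profile names and paths are placeholders.

```python
import json

# Hedged sketch of a Managed File Transfer job that moves bank reconciliation
# files from an external SFTP source into an SAP landing directory.
# "Job:FileTransfer", "ConnectionProfileSrc", and "ConnectionProfileDest" are
# our reading of the Automation API schema; profiles and paths are placeholders.
mft_job = {
    "NON_SAP_POSTINGS": {
        "Type": "Folder",
        "PullBankReconFiles": {
            "Type": "Job:FileTransfer",
            "Host": "mft-agent-01",
            "ConnectionProfileSrc": "BANK_SFTP",     # placeholder source profile
            "ConnectionProfileDest": "SAP_LANDING",  # placeholder destination profile
            "FileTransfers": [
                {
                    "Src": "/outbound/recon/*.csv",
                    "Dest": "/sap/inbound/recon/",
                    "TransferType": "Binary",
                }
            ],
        },
    }
}

print(json.dumps(mft_job, indent=2))
```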

Reduce audit risk with Control-M

Control-M provides complete visibility into the financial close cycle and effectively monitors all tasks and financial postings from inside and outside SAP, providing audit trails that enable timely remediation and ultimately reduce the likelihood of an external financial audit. All jobs and tasks are transparent and contain logs. Only users with the correct roles and authorizations can execute jobs. If auditors want to audit certain postings, they can see the logs and job output.

Workflow insights with Control-M

Control-M Workflow Insights provides valuable dashboards that give users in-depth observability to continuously monitor and improve the performance of the application and data workflows that power critical business services. Users get easy-to-understand dashboards with insights into the trends that emerge from continuous workflow changes to help reduce the risk of adversely impacting business services. With Control-M Workflow Insights, one can see the trends and any bottleneck in previous financial closings and plan better for the next fiscal close.

Control-M Workflow Insights also helps organizations:

  • Manage financial closing KPI tracking and performance, ensuring continuous improvement to financial closing workflow health and capabilities.
  • Improve forecasting of future infrastructure and capacity needs.
  • Understand critical SLA service duration and effects on the business during the financial close period.
  • Find workflow anomalies that could impact Control-M performance and workflow efficiency.

Conclusion

There are many benefits to integrating Control-M into your SAP finance close process. All jobs are effectively monitored within sub-modules and the process flow, and any hindrance can be escalated to the appropriate personnel. Manual steps, such as checking logs and monitoring jobs in SM37, can be effectively automated, bringing better visibility and control to the entire year-end process.

With Control-M for SAP, you’ll get the following benefits:

  • Better visibility at all times
  • The ability to restart failed processes
  • The ability to hold dependent processes and trigger them manually or via an event
  • Faster closing cycles that meet regulatory requirements and financial reporting standards
  • Increased user efficiency through centralized monitoring and control and enhanced automation

Control-M simplifies workflows across hybrid and multi-cloud environments and is available as self-hosted or SaaS. Get the most out of your SAP finance close process by modernizing your orchestration platform with Control-M, an SAP Certified Partner for Integration with RISE with SAP S/4HANA Cloud.

To learn more about Control-M for SAP, visit our website.

SAP, ABAP, SAP S/4HANA are the trademark(s) or registered trademark(s) of SAP SE or its affiliates in Germany and in several other countries.

]]>