Abstract — This article examines challenges and offers solutions in the area of ABAP programming, which serves as the central programming language in the SAP environment. The first part explains the most important aspects of ABAP, including its role in controlling and extending business processes in organizations and the variety of ABAP objects. Then, typical challenges in dealing with ABAP coding in large companies are described, such as non-transparent dependencies, historically grown code structures, a lack of modularity, and the risk of unplanned adaptations. Finally, special emphasis is placed on the deployment of automated add-on tools that increase transparency and efficiency in ABAP development. A use case shows how these tools help to analyze and document code structures and dependencies more quickly and precisely.


1. Understanding ABAP Coding: Definition, Classification and Associated Objects

ABAP (Advanced Business Application Programming) is a programming language developed by SAP that is mainly used for the development of applications in the SAP environment. As SAP is the world’s leading provider of business software, ABAP is one of the main languages for controlling and extending business processes within enterprise applications.

Although ABAP was originally procedural, today it also supports object-oriented programming, similar to Java or C++. This makes it easier to implement modern software development principles.
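To make this tangible, here is a minimal, purely illustrative sketch that places the same calculation side by side in procedural and object-oriented style – the report, class and type names are invented for this example:

```abap
REPORT zdemo_oo_vs_procedural.

TYPES ty_amount TYPE p LENGTH 8 DECIMALS 2.

" Procedural style: a classic FORM routine
FORM calculate_gross USING    iv_net   TYPE ty_amount
                     CHANGING cv_gross TYPE ty_amount.
  cv_gross = iv_net * '1.19'.
ENDFORM.

" Object-oriented style: the same logic as a method of a local class
CLASS lcl_price_calculator DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS calculate_gross
      IMPORTING iv_net          TYPE ty_amount
      RETURNING VALUE(rv_gross) TYPE ty_amount.
ENDCLASS.

CLASS lcl_price_calculator IMPLEMENTATION.
  METHOD calculate_gross.
    rv_gross = iv_net * '1.19'.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  DATA lv_net   TYPE ty_amount VALUE '100.00'.
  DATA lv_gross TYPE ty_amount.

  " Procedural call
  PERFORM calculate_gross USING lv_net CHANGING lv_gross.

  " Object-oriented call with an explicit, typed interface
  lv_gross = lcl_price_calculator=>calculate_gross( iv_net = lv_net ).
```

The object-oriented variant makes the interface of the logic explicit, which pays off later when dependencies between objects need to be traced.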

In a large company, the number of ABAP objects (programs, classes, function modules, reports, etc.) can run into the tens of thousands. These include:

  • Reports: Programs that are used for retrieving, processing and displaying data
  • Function Modules: Reusable modules in ABAP that encapsulate specific tasks and are called from other programs
  • Forms: User-defined forms such as invoices or delivery notes
  • Enhancements: User exits, BAdIs (Business Add-Ins) and other mechanisms for customizing the SAP standard


2. Mastering Challenges in ABAP Coding

Development teams in large companies often struggle with non-transparent dependencies due to the sheer volume of ABAP coding. This results in errors, increased data model complexity, poor maintainability and risks during system updates. Below, we’ll explain how these dependencies come about in the first place:


a) Avoid Missing or Insufficient Documentation

One of the most common causes of non-transparent dependencies is inadequate or non-existent documentation. In many projects, the focus is placed on the development of functionality, while documentation is neglected. However, it is essential to record for “posterity” how different programs, modules and data structures are linked together. Without clearly defined requirements, assigned responsibilities and the continuous updating of documentation, it becomes difficult to recognize dependencies, because reverse engineering across multiple system types is extremely time-consuming.

b) Managing Historically Evolved Code

SAP systems and their ABAP code often exist for many years and are continuously adapted and extended. Over time, more and more user-defined functions, quickly implemented solutions, workarounds and enhancements accumulate that were originally intended to meet short-term requirements. Old modules and programs continue to be used even though newer solutions exist, and changes are sometimes made without taking the overall system into account. As a result, these “evolved” structures are no longer clearly traceable, especially if different internal and external developers or teams have worked on the same programs over the years.

c) Consider Re-use and the Lack of Modularity

In ABAP development, global data and functions are often used that are integrated into many different programs. If developers use global classes, database tables or function modules without defining clean modularization and clear interfaces, tight dependencies arise between different parts of the system. Copying code instead of reusing shared modules has the same effect. These dependencies are often difficult to recognize and are not always documented.
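As a small, hedged sketch of what clean modularization can look like (the class ZCL_SALES_READER and the table ZSALES_HEADER are invented names for this illustration), shared database access can be wrapped in one global class with a clear interface instead of being copied into every report:

```abap
" Illustrative global class – one documented access path instead of many copies
CLASS zcl_sales_reader DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    CLASS-METHODS total_for_company
      IMPORTING iv_bukrs        TYPE bukrs
      RETURNING VALUE(rv_total) TYPE decfloat34.
ENDCLASS.

CLASS zcl_sales_reader IMPLEMENTATION.
  METHOD total_for_company.
    " The dependency on the (hypothetical) table ZSALES_HEADER is visible
    " in exactly one place.
    SELECT SUM( amount )
      FROM zsales_header
      WHERE bukrs = @iv_bukrs
      INTO @rv_total.
  ENDMETHOD.
ENDCLASS.
```

A consuming report then calls zcl_sales_reader=>total_for_company( iv_bukrs = '1000' ) and thus depends on a documented interface rather than on the table itself.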

d) Avoid Unplanned and Uncoordinated Adjustments

In larger SAP installations, several developers or teams often work on different data structures and functions at the same time. If these adjustments are made without clear coordination or communication, without versioning, or without code review processes, dependencies can arise that those involved are not aware of. These dependencies then remain opaque until an error or problem brings them to light.

e) Correctly Regulate the Use of Dynamic and Indirect Calls

In ABAP, it is possible to use dynamic program calls and indirect accesses to implement generic solutions (e.g. dynamic function calls or SELECTs on tables whose names are only determined at runtime). Metadata or control tables are also sometimes evaluated at runtime to steer the program flow. Such techniques can be useful for developing flexible solutions, but they make the code less comprehensible. Without clear references to the dependent modules, it becomes more difficult to understand which programs or data structures are actually being used.
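The following minimal sketch illustrates both techniques; all Z names are invented for this example and the code is deliberately simplified:

```abap
REPORT zdemo_dynamic_access.

DATA: lv_tabname  TYPE tabname  VALUE 'ZSALES_HEADER', " known only at runtime
      lv_funcname TYPE funcname VALUE 'Z_CALC_KPI',    " known only at runtime
      lr_data     TYPE REF TO data.

FIELD-SYMBOLS <lt_data> TYPE STANDARD TABLE.

START-OF-SELECTION.
  " Dynamic SELECT: the accessed table never appears as a static dependency,
  " so a where-used list on ZSALES_HEADER will not point to this program.
  CREATE DATA lr_data TYPE STANDARD TABLE OF (lv_tabname).
  ASSIGN lr_data->* TO <lt_data>.
  SELECT * FROM (lv_tabname) UP TO 100 ROWS INTO TABLE <lt_data>.

  " Dynamic function call: which module runs is only decided at runtime.
  CALL FUNCTION lv_funcname.
```

Because the table and function names exist only as the content of variables, static analysis and where-used lists cannot see these dependencies – which is exactly why such constructs should be used sparingly and documented well.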

f) Mind the Close Connection Between User-defined and Standard SAP Components

“User exits”, “enhancements” or “modifications” to extend the SAP standard are commonplace. These user-defined developments (Z programs, enhancements) are often strongly linked to the SAP standard. When the standard changes (e.g. through an SAP upgrade or a support package), unforeseen dependencies can arise because custom and standard code are closely interlinked in a way that is not transparent and can be difficult to untangle.

g) Do Not Neglect Tests and Quality Controls

If the code is not sufficiently tested or checked, dependencies can be overlooked. Tests, especially unit and integration tests, often uncover hidden dependencies as soon as a change is made to a module or program. If such tests do not take place or are inadequate, these dependencies remain undetected for a long time and faulty changes make it into production. This problem is ultimately rooted in a lack of quality assurance processes.
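A minimal ABAP Unit sketch shows what such a safety net can look like – it assumes that the illustrative lcl_price_calculator class and the ty_amount type from the sketch in section 1 live in the same program:

```abap
" Local test class – fails loudly as soon as a change breaks the calculation
CLASS ltc_price_calculator DEFINITION FOR TESTING
  DURATION SHORT RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS gross_includes_tax FOR TESTING.
ENDCLASS.

CLASS ltc_price_calculator IMPLEMENTATION.
  METHOD gross_includes_tax.
    DATA lv_net      TYPE ty_amount VALUE '100.00'.
    DATA lv_expected TYPE ty_amount VALUE '119.00'.

    DATA(lv_gross) = lcl_price_calculator=>calculate_gross( iv_net = lv_net ).

    cl_abap_unit_assert=>assert_equals(
      act = lv_gross
      exp = lv_expected
      msg = 'Gross amount must include 19% tax' ).
  ENDMETHOD.
ENDCLASS.
```

Every hidden dependency that changes the behavior of calculate_gross now surfaces as a failed test instead of as a production incident.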


3. Tool Support: Automated Transparency in ABAP Coding

As described above, the challenges with ABAP coding are many and varied and involve a lot of effort in terms of documentation, coordination and quality assurance. SAP add-on tools can automate these processes: they document the code, make it quickly searchable and adaptable, facilitate testing, and offer collaboration functions for both internal and external stakeholders.

A specific use case will help you to visualize the use of such tools in everyday working life:

You are an ABAP developer. For some time now, you have been wondering why the record type for company code 2000 in table ACDOCA is always set to Planned. Unfortunately, you no longer have access to the original developers who implemented these processes because they have long since left the company. You are therefore faced with the challenge of determining the origin of the data in this table without any tips or documentation.

Instead of laboriously combing through the SAP BW backend and manually searching for the relevant ABAP objects, use an SAP add-on tool that can automatically search through metadata and put it into context. One such tool is our “System Scout” software, for example. Using its “ABAP Relations” function, you can analyze relationships between various ABAP objects and the ACDOCA table at the touch of a button. This is how you discover that the Z_UPDATE_ACDOCA program manipulates the ACDOCA table.

The tool identifies all manipulations by INSERT, MODIFY, UPDATE and DELETE statements.

But you require even more information: it is important for you to know which ABAP objects trigger the data manipulation. Here, too, the tool offers a helpful function: the where-used list. Running this analysis for the program Z_UPDATE_ACDOCA reveals that the program is referenced in the function module Z_FM_ACC:


Knowing where to look for the logic responsible for the Planned record type saves you valuable time when you have a full to-do list and no documentation.

After the automated analysis of the source code, you also find out that a very old piece of logic from 2012 always sets the record type for company code 2000 to Planned.
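Purely for illustration – this is not the actual code from the use case, and the field name and value used for the record type are assumptions – the kind of legacy logic such an analysis typically brings to light might look like this:

```abap
" Hypothetical excerpt from Z_UPDATE_ACDOCA (field RRCTY and value '1'
" are assumed stand-ins for the record type 'planned').
DATA lt_acdoca TYPE STANDARD TABLE OF acdoca.

SELECT * FROM acdoca
  INTO TABLE lt_acdoca
  WHERE rbukrs = '2000'.              " company code 2000 only

LOOP AT lt_acdoca ASSIGNING FIELD-SYMBOL(<ls_acdoca>).
  <ls_acdoca>-rrcty = '1'.            " hard-coded rule from 2012: always 'planned'
ENDLOOP.

" Direct database update on ACDOCA – exactly the kind of INSERT/MODIFY/
" UPDATE/DELETE manipulation that the tool flags.
MODIFY acdoca FROM TABLE lt_acdoca.
```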


Incidentally, beyond the classic ABAP use case described above, the “ABAP Relations” function also supports you in the BW environment. If a lookup scan is carried out in a BW data flow, the identified objects can then also be analyzed with “ABAP Relations”:


For example, you can see that the table ZSUPPLIER is manipulated by a program – namely Z_UPDATE_SUPPLIER:


In addition, identified tables that belong to BW objects are displayed directly as BW objects. This ensures a better understanding and enables further interactions and analyses.


The functions of the System Scout tool, in particular “ABAP Relations” and the where-used list, offer considerable advantages to ABAP developers who need to quickly understand complex data flows:

  • They provide a quick overview of data manipulations and their origin, save time and increase the accuracy of analyses.

  • By clearly displaying the relationships between different ABAP objects and tables, these functions create transparency and make work considerably easier.

  • In addition, support in the BW environment makes the entire data flow analysis in SAP systems even more efficient and easier to understand.

P.S.: Nobody stays in their job forever – so don’t forget to document ABAP objects and their relationships for posterity. There is also an SAP add-on tool that automates this – the Docu Performer. It ensures that future ABAP developers no longer have to start their research from scratch, but can directly access detailed and up-to-date documentation.

Abstract — Collaboration is a crucial driver of success, especially in complex domains like the Business Intelligence (BI) of a corporation. That’s because collaboration allows employees to pool their knowledge and skills and work together more efficiently. That way, the BI team can respond to changes more quickly, act more flexibly, and ultimately positively influence their corporation’s results and competitiveness.

Now, what requires collaboration in a BI work context, exactly? The everyday tasks and decisions of any BI employee involve charts and reports. Most BI departments use high-end software solutions to make them accessible and offer a deeper insight. Yet, many face low use and acceptance among their teams. And this is where collaboration steps in. Collaborative BI boosts the acceptance of BI software and simultaneously changes how we handle data analysis and decision-making – by promoting teamwork and combining everyone’s knowledge.

1. Understanding Collaborative BI

In order to get a deeper understanding of Collaborative BI, let’s have a look at the ‘old’ way, before this trend: Traditional Business Intelligence follows a centralized, IT-driven model where a specialized team of analysts produces static, historical reports for decision-makers, often leading to extended turnaround times for fresh insights.

Now, on the other hand, Collaborative BI enables a wider array of users throughout the organization to interact with dynamic, real-time data using self-service tools that diminish reliance on IT. This method and its corresponding tools promote improved collaboration through functionalities such as report sharing, commenting, and annotating, while emphasizing both real-time and predictive analytics to facilitate proactive decision-making.


Traditional BI versus Collaborative BI

 

Key Objectives of Implementing Collaborative BI

The primary aim of Collaborative BI is to enhance problem-solving and decision-making processes. The following aspects are fundamental to achieving this overarching goal:

Decentralized Analysis

By engaging and empowering a diverse range of users with various roles, backgrounds, and skill sets, organizations can tap into a multitude of perspectives and collective intelligence. This approach helps in mitigating bottlenecks that are traditionally linked to centralized teams, thereby expediting the process of problem-solving. Engaging users from diverse departments and backgrounds fosters a rich array of viewpoints and insights, ultimately resulting in more thorough and inventive solutions.

Improved Dashboard & Report Design

Users with diverse roles, backgrounds, and skill levels require customized dashboards and reports that align with their specific needs. By fostering the sharing of ideas and knowledge among these users, organizations can create tailored dashboards and reports that effectively meet the varied requirements of their audience. Moreover, real-time access to data enables users to quickly identify and address issues as they arise. Interactive dashboards and reports allow users to drill down into data, uncovering root causes and patterns more quickly than with static reports.

Collaboration Tools & Services

Collaborative BI tools provide features such as commenting, sharing, and discussion threads that facilitate immediate communication and collaboration among team members, allowing for faster consensus and action. Seamless real-time data sharing across the organization ensures that all relevant stakeholders have access to the same information, fostering a unified approach to decision-making. Self-service BI tools enable users to generate their own reports and queries without waiting for IT support, accelerating the decision-making process.


2. Challenges in Implementing Collaborative BI

The implementation of Collaborative BI presents a unique set of challenges, which can differ based on the organization’s initial position and current circumstances. Overcoming these challenges will ensure the success of your Collaborative BI implementation.

  • Tool Elasticity
  • Data Privacy, Security and Data Ownership
  • Metadata
  • Data Integration
  • Communication between Employees


 

Tool Elasticity

Tool elasticity, meaning the ability of BI tools to scale and adapt to varying user needs and workloads, poses a challenge for implementing collaborative BI as well: Ensuring scalability without performance degradation, integrating with existing systems, managing variable costs, and facilitating user adoption across all skill levels require significant effort. Additionally, data security concerns, especially with cloud-based solutions, performance optimization, maintaining consistent and reliable access, and balancing customization with stability complicate the process. These factors make it difficult for organizations to effectively implement and maintain elastic BI tools for collaborative efforts.

Data Privacy, Security & Data Ownership

Data privacy, security, and data ownership of course pose challenges when implementing collaborative BI: Handling sensitive information, managing authorized usage, ensuring compliance with regulations like GDPR and HIPAA, and managing the increased risk of data breaches are complex and critical tasks. Additionally, implementing robust security measures and secure infrastructure requires significant investment and expertise. Continuous user training and awareness programs are essential to minimize human errors that could compromise data security, further complicating the implementation of collaborative BI.

Metadata

Metadata is extremely helpful in the context of collaborative BI because it answers questions about data origin and usage. In traditional BI, these questions are asked by business departments and answered by IT. In collaborative BI, business users find the answers themselves. This, however, presents the challenge of ensuring that data is correctly understood by less tech-savvy users and utilized across the organization – e.g. through comprehensive training. Additionally, metadata is only useful for correct analyses when it is kept up to date – this involves significant effort and constant documentation of data sources, definitions, structures, and usage. Discrepancies in metadata can lead to misinterpretations and inconsistencies, complicating data sharing and collaboration.

Data Integration

Data integration is particularly challenging and crucial for Collaborative BI: It involves consolidating different data sources with varying formats, structures, and quality levels into a unified system that all users can access and analyze. It is essential for enabling real-time, collaborative decision-making, but it requires sophisticated tools and processes for data extraction, transformation, and loading (ETL). Effective data integration also necessitates collaboration between IT and business units to align on data definitions and standards, a challenging but essential task to ensure that all users are working with the same accurate and consistent data.

Communication between Employees

Communication between employees is the heart of the whole matter of Collaborative BI – and it is a challenge itself: Due to the varying levels of (technical and business) expertise and understanding of data, differences in language, priorities, and perspectives, misunderstandings are bound to occur. They can lead to incorrect data interpretations, flawed analyses, and poor decision-making. Additionally, coordinating across departments and ensuring that everyone is aligned on BI objectives, processes, and tools necessitates continuous effort. Implementing these channels and fostering a culture of open communication requires ongoing commitment from leadership to break down silos and encourage active participation from all employees.


3. Recommendations for Improving Collaboration in Your Existing BI Landscape

Collaborative BI may pose its challenges, but with the following recommendations you will overcome them – and its striking benefits will far outweigh the effort:

  • Self-Service & Data Visualization
  • Data Quality & Data Governance
  • Metadata Management & Data Cataloging
  • Culture & Communication


Self-Service & Data Visualization

Self-service and data visualization are key when it comes to Collaborative BI – both aspects take the weight off the IT department’s shoulders and make data accessible and understandable to all departments. They materialize in the form of

  • intuitive, user-friendly tools that empower employees of all skill levels to access, analyze, and visualize data independently, fostering a data-driven culture across the organization
  • comprehensive training to ensure these tools are used effectively and efficiently
  • customizable data visualizations that allow users to tailor dashboards to their specific needs and easily share findings with colleagues
  • robust data governance and real-time data access that further enhance the reliability and relevance of the insights generated.

Encouraging feedback and continuous improvement of these tools based on user experience helps to keep them aligned with the evolving needs of the organization.

Data Quality & Data Governance

A second big necessity in the process of introducing Collaborative BI is improving data quality. It can be achieved by implementing strong data governance practices:

Establishing

  • standardized data entry protocols,
  • regular data cleaning
  • validation processes
  • and clear data ownership that ensures accountability among all stakeholders

is essential to maintain high-quality data.

Advanced data management tools even automate error detection and correction and can significantly reduce inconsistencies. Ultimately, the culture of transparency that comes with Collaborative BI fosters open communication about data issues and collective efforts to resolve them, even if done manually.

Metadata Management & Data Cataloging

Finally, metadata management and data cataloging are essential for facilitating Collaborative BI. Ideally, you can even combine the two: via APIs or dedicated metadata repositories, it is possible to include SAP or Power BI metadata in, or alongside, your Data Catalog.

When implementing a data catalog, make sure that it

  • serves as a central point of access for every BI employee (single point of truth)
  • provides an intuitive interface and displays data in a straightforward way, so that users with varying backgrounds are able to comprehend the information
  • includes metadata such as the usage, source and lineage of data in order to efficiently answer questions that arise in the context of reporting
  • displays up-to-date metadata in order to make the single point of truth really true – and thus boost the usage of the Data Catalog

Continuous training and support for employees on the importance and use of metadata further enhance their ability to contribute to and benefit from the collaborative BI efforts, ultimately leading to more informed and effective decision-making.

Culture & Communication

Last, but not least, people make a company. In order to foster the new collaborative culture, management should:

  • prioritize transparency and actively encourage the sharing of information and insights across all levels of the organization
  • implement regular training sessions and workshops to enhance data literacy, ensuring all employees feel confident in their ability to contribute to BI initiatives
  • recognize and reward teamwork and collective problem-solving
  • create dedicated physical collaboration spaces to streamline communication and data sharing, making it easier for teams to work together effectively

4. Collaborative BI Tools: Data Catalog meets Metadata Repository

As described above, user-friendly Data Catalogs and Metadata Repositories are two crucial tools to enhance Collaborative BI in your company. As a BI software development company with 16 years of experience, bluetelligence has developed a combination of both: our Data Catalog “Enterprise Glossary” includes business information as well as automatically synchronized metadata from all connected SAP and Power BI systems. It checks all the boxes for driving Collaborative BI by

  • providing a central access to all key figures and reports in the company
  • including information for all knowledge levels: business definitions as well as technical metadata (data source, data lineage, related key figures, etc.) in an understandable way
  • offering a user-friendly search and intuitive interface
  • automatically syncing all connected SAP & Power BI systems for up-to-date information
  • providing communication features for remarks and discussions
  • letting you use standard templates or customize it entirely to your needs



Glossary Entry Data Catalog


Data Lineage in the Data Catalog


 

Overall, bluetelligence empowers your company to leverage metadata more effectively, driving innovation and improving business outcomes through enhanced collaboration. Read more about our data catalog, the Enterprise Glossary, on www.enterprise-glossary.de/en.

Should you already utilize a Data Catalog but are looking to include SAP or Power BI metadata, our API serves this purpose exactly. In this case, head to www.bluetelligence.de/en/metadata-api.

Abstract — As software developers in the area of SAP Business & Analytics, we repeatedly encounter “time wasters”, i.e. everyday Business Intelligence processes that could be approached much more efficiently. This article deals with the dependency between business departments that work with dashboards and reports and IT, which in turn processes tickets when errors occur. As a solution, it discusses the BI Self-Service concept, which can help to speed up these processes and thus save costs, time and nerves. Specifically, it involves utilizing data cataloging to provide business departments with insights into the metadata of their key figures and reports – thus relieving the burden on IT and making business processes more efficient.

The Use Case: Usually an Error in a Dashboard

We all know how it goes: Business dashboards are supposed to provide clarity – but if they don’t display the correct values or show error messages, the opposite is of course the case. This is particularly bad if the department notices the error shortly before a meeting in which the dashboard is needed.

The error often stems from one of the key figures used in the SAP system – especially if the key figure is made up of several other key figures in the system. We call this a ‘nested key figure’. Another term that is often used is the ‘calculated key figure’.

BI Self-Service in SAP Dashboards

Calculated Key Figures Have Their Pitfalls

The key figure ‘Expected Incoming Orders’ in a sales dashboard can, for example, be made up of four to six other, equally nested key figures – for example, the sales probability, the open offers, and so on.
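Purely as an illustration – the actual composition of the key figure is not shown here, so the structure below is an assumption – such a nested calculation might look like this:

$$\text{Expected Incoming Orders} = \sum_{\text{open offers}} \text{Offer Value} \times \text{Sales Probability}$$

where the offer value and the sales probability can themselves be calculated key figures, which is exactly what makes tracing an error by hand so tedious.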

Finding out whether there is an error in one of the many key figures in the SAP system takes a hell of a lot of time.

What exactly takes so long? As long as the department can only see the error in the dashboard, but cannot see which underlying key figures the incorrectly displayed key figure contains, it can only submit an unspecific support ticket to IT. IT then has to search for the error and investigate the entire background of the key figure in the SAP system. And since IT is known to be swamped with tickets, the problem will not be solved in time before the meeting (or the next one).

As promised, this article is not only about problems, but also about solutions – and the solution in this case is BI Self-Service – more precisely, a Data Catalog. And a driver tree. Let us explain why.

The Solution: SAP Metadata in Any Business Department's Data Catalog

In order to give business departments more autonomy with their dashboards and relieve IT of work in equal measure, it is advisable to use a data catalog that also provides an overview of SAP – the prerequisite, however, is that the information displayed is prepared in a way that business departments can understand.

Our Data Catalog, the Enterprise Glossary, creates glossary entries for each key figure of synchronized SAP systems, in which both the technical definition and its calculation with all involved key figures are mapped. With the latest Enterprise Glossary function, the “driver tree”, the formula is even displayed graphically as a network graphic, providing an easy-to-understand overview of all levels of the nested key figure (see GIF).

This means that specialist departments without access to the SAP backend can immediately see which key figures are involved in the dashboard. Since they can comprehend the mathematical calculation of the key figure, they will most likely already be able to tell IT the specific key figure that is displayed incorrectly in SAP. IT will then be able to rectify the problem in the backend in a much more targeted manner – and much more quickly. The Data Catalog thus acts as a link between business departments and IT – and makes life a little easier for everyone. Of course, the function is also useful in everyday life to understand how certain values in dashboards come about in the first place.

Conclusion: The use of a data catalog with a real-time connection to the SAP systems creates a self-service point that enables more efficient collaboration between specialist departments and IT – be it when searching for errors in dashboards, defining key figures or answering questions about existing reports.

Plastic waste in the world’s oceans is a huge problem. With his Ocean Cleanup Project, the young Dutchman Boyan Slat has set himself the task of actively tackling this problem. We are impressed by his determination and innovative strength and have been supporting the project since last year. So it’s high time we told you about it.

Five trillion pieces of plastic are currently floating in the ocean. However you quantify this figure, it sounds frightening. In the long term, the pieces of plastic break down into microplastics and cause fatal damage to our ecosystem. In addition to marine life, humans are also affected along the food chain.

Boyan Slat Ocean Cleanup Project

The question remains as to what to do about it. Collecting and transporting each piece individually would be neither affordable nor feasible in terms of time. For many, however, simply giving up on the oceans is – fortunately – not an option.

Dutchman Boyan Slat decided not to give up helplessly, but to take action instead. He made this decision at the age of just 17. Two years later, in 2013, he founded the Ocean Cleanup Project. With a team of up to 100 researchers and engineers, he has continued to tinker with and develop an ingenious system.

The targeted technology is essentially based on plastic tubes arranged in a U-shape. Held to the seabed by weights, these tubes float on the sea surface and bundle the waste in the middle with the help of natural ocean currents, where it can be skimmed off after a while. The sea collects its own waste, so to speak.


After the first prototype was successfully deployed, the first major marine clean-up mission with “System 001” – affectionately known as “Wilson” – was launched in 2018. It began with the Great Pacific Garbage Patch between Hawaii and California, the largest of the five large plastic waste fields. It is estimated to be 1.6 million square kilometers in size. The goal: to eliminate 50 percent of the Great Pacific Garbage Patch in just five years. To this end, continuous improvements have been made to the system since the start of the mission in order to make it more effective.

When we at bluetelligence first heard about the Ocean Cleanup Project, we were impressed by Boyan Slat’s tenacity, his passion and not least his visionary solution. It was therefore an easy decision to donate €5,000 to the Ocean Cleanup Project in spring 2018. Since then, we have also continued to support the project with €10 per employee per month.

Boyan Slat acts sustainably and wants to leave the world better than he found it. We at bluetelligence can identify with this. Long-term, resource-saving solutions and persistence in implementing visions are also important to us in product development. In addition, in the spirit of the well-known African proverb, we believe that many small people in many small places doing many small things can change the face of the world. This starts, for example, with the switch to glass bottles in the office kitchen and continues with electric company cars and donations to great projects like this one. Of course, we continue to be inspired in this respect and hope to inspire others to contribute to the preservation of our environment.

If you would also like to donate, you can do so directly on the Ocean Cleanup Project website. Please contact us if you have any questions.