Technology Blogs by SAP
gerd_schoeffl
Product and Topic Expert

The Need for Upload Functionality

Planning is often done in a heterogeneous landscape, using different tools, products, and systems. Thus, an exchange of data between systems is essential, and not all of these data loads can or should be done by an administrator. Many of our customers have voiced the need for a simple way for end users to easily upload plan data from CSV or Excel files.

Established Workarounds

SAC has been offering a file upload via the file import in the data management area of the model. But what customers were asking for was a simpler solution for the end user that can be triggered directly from the story.

Thus, with QRC3 2024 we are introducing a new, end-user-friendly upload of fact data from CSV or Excel files.

As the need for an end-user-driven upload functionality has been quite large, there are several blogs in this community describing solutions for a file upload based on custom widgets (which call the Data Import Service to post the data to the SAC system), in particular https://community.sap.com/t5/technology-blogs-by-sap/how-to-upload-data-to-a-public-or-private-plann...

With the new standard solution, customers can now replace those custom solutions (which are not maintained by SAP) with a proper solution in the product. We strongly recommend that customers migrate to the standard solution, as SAP will not provide any further enhancements or corrections to the custom widget mentioned in the blog above (which has the status of ‘sample coding’ anyway) from now on.

Details of the New Solution

Let us have a look at how the new file upload works. We have three main personas involved in the upload:

  • Modeler: uses a template file to create the reusable upload job, defines the job parameters and the mappings from file columns to system fields, and defines wrangling (if necessary).
  • Story designer: creates the file upload starter (previously called trigger) in the story based on the predefined upload job, and defines the version and publish behavior.
  • End user: chooses the upload file (Excel or CSV) and the version (if applicable).

Let us have a closer look at the functionality now.

Defining the upload job

We have introduced a new entry in the data management area of the model – the upload job. This is the job type that is used for the new file upload. Please note that the new file upload is only available for new models but not for "classical" account models.

Upload 1.jpg

To create a new upload job, the modeler must upload a sample file. This file does not need to (but can) contain real data. The sample file must have the same structure as the upload file the end user will use later.
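Since the job's mapping is defined against the template, the structural constraint can be pictured with a small comparison of header rows. This is plain JavaScript for illustration only, not SAC scripting, and the function name is invented:

```javascript
// Hypothetical pre-check illustrating the rule: the upload file must have the
// same column headers and the same number of columns as the template file.
function matchesTemplate(templateHeader, uploadHeader) {
  if (uploadHeader.length !== templateHeader.length) return false;
  return templateHeader.every((name, i) => name === uploadHeader[i]);
}

const template = ["Account", "CostCenter", "2024.01", "2024.02"];
console.log(matchesTemplate(template, ["Account", "CostCenter", "2024.01", "2024.02"])); // true
console.log(matchesTemplate(template, ["Account", "CostCenter", "2024.01"]));            // false (column missing)
```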

The upload job uses the same wrangling screen as the import jobs. Various transformations can be defined for the uploaded data, and it is also possible to un-pivot the file. Thus, it is possible to upload Excel/CSV files that spread one dimension (typically time) across the columns. Please note that the column headers and the number of columns must be identical when defining and when executing the file upload.
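To make the un-pivot transformation concrete, here is a small sketch in plain JavaScript (illustration only, not SAC scripting; the dimension and column names are invented) that reshapes a file with one column per month into one fact record per month:

```javascript
// Un-pivot ("wide to long"): every non-key column becomes its own record.
function unpivot(rows, keyCols, pivotDimension) {
  const out = [];
  for (const row of rows) {
    for (const [col, value] of Object.entries(row)) {
      if (keyCols.includes(col)) continue;  // key columns are carried over, not un-pivoted
      const record = {};
      for (const k of keyCols) record[k] = row[k];
      record[pivotDimension] = col;         // the column header becomes a dimension member
      record.Value = value;
      out.push(record);
    }
  }
  return out;
}

// A file with the Date dimension drilled across the columns ...
const wide = [{ CostCenter: "CC10", "2024.01": 100, "2024.02": 120 }];
// ... becomes one record per Date member:
const long = unpivot(wide, ["CostCenter"], "Date");
// [{ CostCenter: "CC10", Date: "2024.01", Value: 100 },
//  { CostCenter: "CC10", Date: "2024.02", Value: 120 }]
```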

Upload 2.jpg

In the mapping screen the columns of the file are mapped to the system dimensions. Instead of mapping each target dimension to a file column, selected dimensions can also be set to a fixed (single) value. If a measure is not mapped during this stage, the existing values of that measure in the system will not be changed. Thus, it is possible to upload only selected measures.

Upload 3.jpg

The version needs to be mapped to a single value and cannot be filled from the file. The reason for this is that currently the system will always upload to a single private or public edit version. The system does not upload directly into public versions – a publish step is always necessary. This publish step can either be done by the end user or can be triggered automatically.

The version set in the mapping dialog is only a default. It can be overruled by the story designer and/or by the end user (depending on the settings in the starter).

In addition, the modeler can decide in the general settings whether the upload should append to or overwrite the existing data, and whether a sign flip should be performed.
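These settings can be pictured with a simplified sketch in plain JavaScript (illustration only; in SAC the settings apply to the whole upload scope, not record by record as in this toy model):

```javascript
// Toy fact store keyed by a dimension tuple. "append" adds the incoming value
// to the existing one, "overwrite" replaces it, and a sign flip negates the
// incoming value before posting.
function post(facts, key, value, { mode = "append", signFlip = false } = {}) {
  const v = signFlip ? -value : value;
  if (mode === "overwrite" || !(key in facts)) {
    facts[key] = v;
  } else {
    facts[key] += v;
  }
  return facts;
}

const facts = { "CC10|2024.01": 100 };
post(facts, "CC10|2024.01", 50);                                        // append: 100 + 50 = 150
post(facts, "CC10|2024.01", 80, { mode: "overwrite" });                 // overwrite: 80
post(facts, "CC10|2024.01", 20, { mode: "overwrite", signFlip: true }); // sign flip: -20
```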

upload 4.jpg

Please keep in mind that the upload functionality at this stage is for transaction data only. At a later stage we also want to introduce an upload for master data.

Using the upload job in the story

The story designer can now use the upload job in a data upload starter. The publish behavior (no publish or automatic publish) can be defined in the data upload starter just as in the data action starter.

The story designer can also define the version handling: whether the end user is allowed to change the version, and whether the default defined in the job, a specific member (via the member selector), or a member picked from a story filter or an input control should be used.

upload 5.jpg

In addition, the story designer can use two scripts that are executed before and after the file upload.

upload 6.jpg

Please be aware that the scripts are always executed before the upload action is triggered and after it has been executed, no matter what happened during the execution. So even if the user cancels the upload, the onAfterExecution script will still be triggered. If you only want to execute your logic after a (successfully) completed upload, a small if statement in the script can handle this. Here is some sample coding:

```
if (status === DataUploadExecutionResponseStatus.Canceled) {
    // Upload canceled, need to clean up or show info?
    Application.showMessage(ApplicationMessageType.Info, "Data Upload Canceled");
    return;
} else if (status === DataUploadExecutionResponseStatus.Error) {
    // Upload encountered an error, perform error handling?
    return;
} else {
    // Case for DataUploadExecutionResponseStatus.Success or DataUploadExecutionResponseStatus.Warning
    // Perform some subsequent actions here after Success (Warning is a kind of Success)
}
```


Performing the upload

Now the end user can use the upload in the story. When pressing the button, the user is prompted for the file name and – depending on the settings in the starter – the version.

upload 7.jpg

The user is informed by a toast message about the success of the upload. If certain records have been rejected, the user is informed accordingly and can download those rejected records. In addition, the user can check the success of the file upload in the notifications.

upload 8.jpg

There are various reasons why a record can be rejected during an upload. Besides technical reasons (such as a wrong data format), the system checks against the master data, authorizations, data locking, and validations defined in the system.

Please note that during an upload all correct records are posted to the selected version. Currently there is no ‘all or nothing’ behavior that rejects the whole upload if there is at least one erroneous record. If such a behavior is desired, the end user can revert the data upload in the history whenever there are rejected records.

In addition to the end user, the modeler can also check the performed uploads. The data management tab contains a data timeline that shows the latest executions of all job types. The summary of the failed records can also be downloaded from there.

upload 9.jpg

