ABAP TUNING

sapjoy 2012.03.13 00:22 (Views: 234786)

SAP-ABAP Performance Tuning Guidelines


Table of Contents
1. Processing Internal Tables
1.1. Sequential Reads
1.2. Direct Reads
1.3. Sorting
1.4. Adding Entries after Sorting
1.5. Counting Entries
1.6. Reading large internal tables
1.7. Moving data from internal table 1 to internal table 2
1.8. Appending data from internal table 1 to internal table 2
1.9. Looping at internal tables
1.10. Deleting data from internal tables
1.11. Deleting duplicate entries from an internal table
2. Processing Database Tables
2.1. Checking SELECT-OPTIONS when reading via a Logical Database
2.2. SELECT SINGLE vs. SELECT *
2.3. Selecting via non-key fields or non-key partial fields
2.4. SELECT field1 field2 …
2.5. SELECT … UP TO n ROWS
2.6. Duplicate (Master) Data Accesses
2.7. SELECT INTO TABLE
2.8. SELECT FROM … ORDER BY
2.9. Aggregate Functions
2.10. SELECT without WHERE
2.11. UPDATE
2.12. DELETE
2.13. COMMIT
2.14. Client Specific Queries
2.15. Joining Tables
2.16. Maximum Value for a Field - An Optimized Approach
2.17. Safe SQL Check
3. Miscellaneous
3.1. Determining the length of a string
3.2. ASSIGN statement
3.3. Testing for Initial Value
3.4. Testing one field for multiple values
3.5. Optimizing IF and CASE structures
3.6. Subroutine / Function Performance
3.7. Performing Calculations
3.8. Checking Return Codes
3.9. Sequential File Processing
3.10. Hard-coding Text (messages, report output, etc.)
3.11. Function Modules
3.12. Guidelines for Using Transactions SE(NN)
3.13. SAP/SQL Performance Tips

1. Processing Internal Tables
FILLING
To load an internal table from a database table, where the structure of the internal table is at least as wide as the structure of the database table, use:

SELECT * FROM dbtab INTO TABLE itab.
Rather than:
SELECT * FROM dbtab.
MOVE dbtab TO itab.
APPEND itab.
ENDSELECT.

To fill an internal table without creating duplicate entries and add up the Packed, Integer, and Floating Point fields at the same time, use:

COLLECT itab.

Use this for tables for which you expect approximately 50 entries or less. The COLLECT statement scans the table sequentially for a match on all fields that do not have a data type of Packed, Integer, or Floating Point. Hence it can be resource-expensive for larger tables.
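
As an illustration, a minimal sketch (the structure and values are made up for this example): COLLECT appends a row for a new key combination and adds the numeric fields onto an existing row with the same key.

DATA: BEGIN OF itab OCCURS 50,
        matnr LIKE vbap-matnr,       " non-numeric field: part of the implicit key
        kwmeng LIKE vbap-kwmeng,     " quantity field: summed up by COLLECT
      END OF itab.

itab-matnr = 'MAT1'. itab-kwmeng = 10. COLLECT itab.
itab-matnr = 'MAT1'. itab-kwmeng = 5.  COLLECT itab.   " existing key: KWMENG becomes 15
itab-matnr = 'MAT2'. itab-kwmeng = 3.  COLLECT itab.   " new key: a new row is appended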
1.1. Sequential Reads
If you are reading through an internal table sequentially using the LOOP at itab ... ENDLOOP method, but are only interested in certain entries, use a WHERE clause and specify a condition for selection. Note that the performance of a LOOP AT ... WHERE statement is improved greatly if all fields being compared in the WHERE clause are of the same data type. Therefore, you should try defining the ‘compare’ fields as follows:

DATA: BEGIN OF itab OCCURS 100,
        field1(5), field2(5),
      END OF itab.
DATA: compare_field LIKE itab-field1.

compare_field = 'value'.          " placeholder for the value to compare against

LOOP AT itab WHERE field1 = compare_field.
...
ENDLOOP.
1.2. Direct Reads
If you are not processing the entire internal table, use:

READ TABLE itab WITH KEY key BINARY SEARCH.
Rather than:
READ TABLE itab WITH KEY key.

The second method performs a sequential read from the first record until it finds a match.

The first method performs a binary search to find a matching record, but the table must be sorted first.
1.3. Sorting
Wherever possible, identify the fields to be sorted. The format:

SORT itab BY field1 field2.

Is more efficient than:

SORT itab.

SORT itab without fields specified attempts to sort the table by all fields other than Packed, Integer, and Floating Point fields.
1.4. Adding Entries after Sorting
To add new entries to a table and keep them in sorted order, use the following method, rather than using the APPEND statement followed by the SORT statement. For a new record the READ statement should fail (SY-SUBRC <> 0), but by using the WITH KEY ... BINARY SEARCH extension, you will be positioned at the location where you want to insert the new record.

READ TABLE INT_TABLE WITH KEY key BINARY SEARCH.
IF SY-SUBRC <> 0.
  INSERT INT_TABLE INDEX SY-TABIX.
ENDIF.
1.5. Counting Entries
To count up the number of entries in an internal table, use:

DESCRIBE TABLE itab LINES field.

Where field is a work field of type ‘I’ (integer).

Rather than:

LOOP AT itab.
W_COUNT = W_COUNT + 1.
ENDLOOP.
1.6. Reading large internal tables
If you have to retrieve selected records from a large internal table, keep this table sorted.
In this way, you can access the table via the statement:

READ TABLE T_VBAK WITH KEY VBELN = W_VBELN BINARY SEARCH.
If you only want to verify the existence of a record but don’t need any of the fields from the record then use the addition

TRANSPORTING NO FIELDS

If you only need a few fields from the internal table for processing then use the addition

TRANSPORTING …

In order to determine if an internal table contains any records, always use the DESCRIBE statement.
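
As a sketch, assuming a sorted internal table T_VBAK with a header line and the fields VBELN, AUART and KUNNR (W_VBELN and W_LINES are work fields declared for the example):

DATA: W_LINES TYPE I.

* Existence check only - nothing is copied into the header line
READ TABLE T_VBAK WITH KEY VBELN = W_VBELN BINARY SEARCH
     TRANSPORTING NO FIELDS.

* Copy only the fields that are actually needed
READ TABLE T_VBAK WITH KEY VBELN = W_VBELN BINARY SEARCH
     TRANSPORTING AUART KUNNR.

* Number of entries / emptiness check
DESCRIBE TABLE T_VBAK LINES W_LINES.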
1.7. Moving data from internal table 1 to internal table 2
If you need to move all entries from one internal table to another one which has the same structure you can simply do it via the following statement:

ITAB2[] = ITAB1[].
1.8. Appending data from internal table 1 to internal table 2
If you need to append records from one internal table to another one which has the same structure you can simply do it via the following statement:

APPEND LINES OF ITAB1 TO ITAB2.
1.9. Looping at internal tables
If you are looping at an internal table just to count the number of records that fulfill certain criteria then use the following variant of the loop statement:

LOOP AT T_VBAK TRANSPORTING NO FIELDS WHERE …
ADD 1 TO COUNTER.
ENDLOOP.

The same applies if you only want to verify that at least one record exists that satisfies a certain condition:

LOOP AT T_VBAK TRANSPORTING NO FIELDS WHERE …
EXIT.
ENDLOOP.

IF SY-SUBRC = 0.
* Record found …
ENDIF.
1.10. Deleting data from internal tables
If you need to delete a subset of records from an internal table use the following:

DELETE T_VBAK WHERE …
1.11. Deleting duplicate entries from an internal table
To delete duplicate entries from an internal table, the table has to be sorted by the fields used in the COMPARING condition. If there is no COMPARING condition, the table should be sorted by all fields.

DELETE ADJACENT DUPLICATES FROM T_VBAK [COMPARING field1 field2 …].
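
For example, a small sketch that keeps only one row per customer in T_VBAK (field names as in the earlier examples):

SORT T_VBAK BY KUNNR.
DELETE ADJACENT DUPLICATES FROM T_VBAK COMPARING KUNNR.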
2. Processing Database Tables
Reading through a set of related tables

When reading through a set of related tables, use FOR ALL ENTRIES rather than a logical database. The combination of FOR ALL ENTRIES and internal tables is more efficient than a set of nested SELECT ... ENDSELECT statements.
2.1. Checking SELECT-OPTIONS when reading via a Logical Database
When using the GET event keyword to retrieve records via a logical database, selection can be restricted by using the CHECK statement (using either CHECK select-option(s) or CHECK condition).

In this case, a CHECK statement that returns a negative result terminates the processing of GET events for subordinate database tables and the associated data is not read. For efficiency reasons, you should therefore always perform the CHECK at the highest possible level of the database hierarchy.

For example:

SELECT-OPTIONS: S_CNTRY FOR KNA1-LAND1,
S_COY FOR KNB1-BUKRS.
...
GET KNA1.
CHECK S_CNTRY.
GET KNB1.
CHECK S_COY.
...
Is more efficient than:
SELECT-OPTIONS: S_CNTRY FOR KNA1-LAND1,
S_COY FOR KNB1-BUKRS.
...
GET KNB1.
CHECK SELECT-OPTIONS.    " or: CHECK: S_CNTRY, S_COY.
...

2.2. SELECT SINGLE vs. SELECT *
SELECT SINGLE * is more efficient than SELECT * … ENDSELECT.

Whenever the full key of a table is known, use:

SELECT SINGLE * FROM dbtab WHERE …. (full key).

Rather than:

SELECT * FROM dbtab
WHERE …. (full key).
...
ENDSELECT.

· UP TO 1 ROWS can be used to retrieve one record when the full key is not known.
· Whenever the full key is not known you will need to use the SELECT * ... ENDSELECT version of the SELECT statement.

In this case, specifying values for as many of the table’s key fields as possible in the WHERE clause will make the SELECT statement more efficient than checking values after the select.

For example:

SELECT db_field1 db_field2 db_field3 FROM dbtab
WHERE db_field4 EQ field4.
...
ENDSELECT.

Is more efficient than:

SELECT db_field1 db_field2 db_field3 FROM dbtab.
CHECK dbtab-db_field4 = field4.
...
ENDSELECT.
2.3. Selecting via non-key fields or non-key partial fields
When selecting records from a database table when only part of a field (on which selection is based) is known, use the LIKE option as part of the WHERE clause:

For example:

SELECT * FROM T001G
WHERE BUKRS EQ 'US01'
AND TXTKO LIKE '__PERS%'.
....
ENDSELECT.

Is more efficient than:

SELECT * FROM T001G
WHERE BUKRS EQ 'US01'.
CHECK T001G-TXTKO+2(4) = 'PERS'.
....
ENDSELECT.
2.4. SELECT field1 field2 …
If you only need a few fields of a table, it is much more efficient to only retrieve exactly those fields from the database than to select all of them (SELECT * …).

Example: if you only need the fields order number, order type and customer from the sales document table, code as follows:

SELECT VBELN AUART KUNNR FROM VBAK
INTO (VBAK-VBELN, VBAK-AUART, VBAK-KUNNR)
WHERE …
WRITE: / …
ENDSELECT.

See the editor help for all the variants of the INTO clause.
2.5. SELECT … UP TO n ROWS
If you only need a certain number of records specify this in your select-statement:

SELECT … FROM … UP TO 10 ROWS.

This is much faster than issuing a SELECT without the UP TO clause and then checking for the system variable SY-DBCNT.
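
A common use is a pure existence check; a minimal sketch, with the field names and the work field W_KUNNR assumed for the example:

SELECT VBELN FROM VBAK UP TO 1 ROWS
       INTO VBAK-VBELN
       WHERE KUNNR = W_KUNNR.
ENDSELECT.

IF SY-SUBRC = 0.
* at least one order exists for this customer
ENDIF.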
2.6. Duplicate (Master) Data Accesses
Oftentimes master data such as the customer master, material master, etc. is accessed unnecessarily. Example: you read 1000 sales orders from table VBAK, and for each sold-to party in table VBAK you want to print out the name. Chances are that several of the selected orders will have the same customer.

Instead of repeatedly accessing data on the database it is better to keep the information in an internal table and check first if the data has been retrieved already before the ‘expensive’ call to the database is made.

Code sample:

SELECT VBELN KUNNR FROM VBAK INTO (VBAK-VBELN, VBAK-KUNNR) WHERE …
  IF VBAK-KUNNR <> T_KNA1-KUNNR.        " a different customer than in the previous row?
    READ TABLE T_KNA1 WITH KEY KUNNR = VBAK-KUNNR BINARY SEARCH.
    IF SY-SUBRC <> 0.                   " customer not buffered yet: read it once from the database
      T_KNA1-KUNNR = VBAK-KUNNR.
      SELECT SINGLE NAME1 FROM KNA1 INTO T_KNA1-NAME1
             WHERE KUNNR = VBAK-KUNNR.
      IF SY-SUBRC <> 0.
        CLEAR T_KNA1-NAME1.
      ENDIF.
      INSERT T_KNA1 INDEX SY-TABIX.     " SY-TABIX from the failed READ keeps the table sorted
    ENDIF.
  ENDIF.
ENDSELECT.
2.7. SELECT INTO TABLE
Reading an internal table is faster than reading a database table. Therefore, use internal tables to store information that must be accessed multiple times throughout a program.
Also, use internal tables when you have to read a header - item structure for which you would otherwise use nested SELECTs.

Instead of processing a

SELECT … INTO
APPEND ITAB
ENDSELECT

statement it is far more efficient to select the fields straight into the internal table and process the data from the internal table:

SELECT … FROM … INTO TABLE ITAB WHERE …
LOOP AT ITAB.

ENDLOOP.
2.8. SELECT FROM … ORDER BY
In most cases it is preferable to do the sorting within the ABAP program instead of on the database server. That means: fill the internal table via a SELECT statement and then sort it via the SORT statement, instead of coding a SELECT … ORDER BY. Sorting large amounts of data on the database server affects the performance of all users on the system, whereas sorting within the ABAP program ‘only’ affects the application server. However, if an index exists on the table that can be used for the sorting, then the SELECT … ORDER BY does not cause any undue strain on the system.
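
A minimal sketch of the recommended pattern (the internal table structure and the work field W_KUNNR are assumed for the example):

DATA: BEGIN OF T_VBAK OCCURS 0,
        VBELN LIKE VBAK-VBELN,
        ERDAT LIKE VBAK-ERDAT,
      END OF T_VBAK.

* Read unsorted, then sort on the application server
SELECT VBELN ERDAT FROM VBAK
       INTO TABLE T_VBAK
       WHERE KUNNR = W_KUNNR.
SORT T_VBAK BY ERDAT.

* instead of: SELECT VBELN ERDAT FROM VBAK INTO TABLE T_VBAK
*             WHERE KUNNR = W_KUNNR ORDER BY ERDAT.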
2.9. Aggregate Functions
Instead of using ABAP, some calculations can be done by using aggregate functions for the SELECT. These are: SUM, AVG, MIN and MAX.
Example:
SELECT MATNR SUM( KWMENG ) MEINS FROM VBAP INTO TABLE T_VBAP
       WHERE … GROUP BY MATNR MEINS.

This example will select the cumulative sales quantities grouped by material number and quantity unit.
2.10. SELECT without WHERE
Coding a SELECT statement without a WHERE condition is only allowed for very small tables (Example: customizing settings). For all other tables this is not permitted as performance problems will occur within a short period of time.
2.11. UPDATE
Instead of updating records within a SELECT … ENDSELECT construct:

SELECT * FROM ZVBAK WHERE VBELN IN S_VBELN.
  ZVBAK-VKBUR = W_VKBUR.
  UPDATE ZVBAK.
ENDSELECT.

define your record selection in the UPDATE statement itself:

UPDATE ZVBAK SET VKBUR = W_VKBUR WHERE VBELN IN S_VBELN.
2.12. DELETE
The same consideration as for the UPDATE is true for the DELETE:
Instead of deleting records within a SELECT … ENDSELECT construct:

SELECT * FROM ZVBAK WHERE VBELN IN S_VBELN.
  DELETE ZVBAK.
ENDSELECT.

define your record selection in the DELETE statement itself:

DELETE FROM ZVBAK WHERE VBELN IN S_VBELN.

2.13. COMMIT
ABAP reports that issue INSERT, UPDATE or DELETE commands have to issue COMMIT WORK statements after a logical unit of work is completed. Missing COMMITs can lead to bottlenecks on the database side because the system has to keep track of the table changes via rollback segments (in order to enable a rollback of all changes since the last commit). Rollback segments are kept in memory and missing COMMITs can lead to overflows of the rollback segment areas. Also the database system holds locks on the changed records that are not released until COMMIT time.
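
For illustration, a minimal sketch for a mass change on the custom table ZVBAK used earlier (the internal table T_UPD and its fields are assumptions for this example):

LOOP AT T_UPD.
  UPDATE ZVBAK SET VKBUR = T_UPD-VKBUR
         WHERE VBELN = T_UPD-VBELN.
ENDLOOP.
COMMIT WORK.     " end of the logical unit of work: releases locks and rollback information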
2.14. Client Specific Queries
SAP Open SQL performs automatic client handling while executing a query. Therefore, the SAP Database Interface implicitly inserts the client (MANDT) that the user is currently logged on to into the WHERE clause of the query. The following example illustrates the above point:

Example:

SELECT column-list FROM table
WHERE column = value

The SAP Database Interface transforms this query and submits it to Oracle in the following form:

SELECT column-list FROM table
WHERE MANDT = '001'
AND column = value

Since MANDT is the first field of the Primary key index, the Optimizer performs an index range scan using the Primary key index to access the data, if no other appropriate indexes are found. But in some cases, especially in the Production environment where there is only one client, a full table scan could provide better performance than accessing the table using the index; therefore the "Client Specified" clause is used to bypass the automatic client handling feature of SAP Open SQL.

Example:

SELECT *
FROM YABC CLIENT SPECIFIED
WHERE column = value

In the above query, the SAP Database Interface will not insert the current client in the WHERE clause. Since the Production environment has only one client, the query will still return the same rows as the query with a client in the WHERE clause, the only difference is that the index will be ignored and a full table scan done on YABC.

NOTE: In the development environments, you may need to add the following "WHERE" clause to the query to exclude the unwanted clients:

WHERE MANDT NOT IN ('255', '400', ...)
2.15. Joining Tables
Joining two Tables with ABAP/4 statement "FOR ALL ENTRIES"

Retrieving data from two (or more) related tables is normally done in ABAP/4 by using nested SELECTs. For example: to retrieve all Customer Orders whose names begin with "T" and who are in the "AES1" Sales Organization, one would join the Customer and Order tables using a nested SELECT as shown below:

SELECT VKORG CUSHTOID CUSHTONM FROM ZCUSTOMER
INTO CORRESPONDING FIELDS OF INT_KUNDEN
WHERE VKORG = '0001' AND CUSHTONM LIKE 'T%'.

IF SY-SUBRC NE 0.
EXIT.
ENDIF.

SELECT SLSORDNR YYCUSHTOID FROM ZORDER
INTO CORRESPONDING FIELDS OF INT_ORDERS
WHERE VKORG = '0001'
AND YCUSHTID = INT_KUNDEN-CUSHTOID.
ENDSELECT.

ENDSELECT.

The above statement is very inefficient because a new SQL statement has to be executed for every single row found in the outer select.

A better way to achieve the same result is to use the "FOR ALL ENTRIES" addition in the inner SELECT statement as shown below:

SELECT SLSORDNR YYCUSHTOID
       INTO CORRESPONDING FIELDS OF TABLE INT_ORDERS
       FROM ZORDER
       FOR ALL ENTRIES IN INT_KUNDEN
       WHERE VKORG = '0001'
       AND YCUSHTID = INT_KUNDEN-CUSHTOID.
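
Putting both steps together, a sketch of the complete pattern (using the same hypothetical Z-tables and fields as above, and guarding against an empty driver table as recommended in section 2.17):

SELECT VKORG CUSHTOID CUSHTONM
       INTO CORRESPONDING FIELDS OF TABLE INT_KUNDEN
       FROM ZCUSTOMER
       WHERE VKORG = '0001' AND CUSHTONM LIKE 'T%'.

IF NOT INT_KUNDEN[] IS INITIAL.      " never run FOR ALL ENTRIES against an empty table
  SELECT SLSORDNR YYCUSHTOID
         INTO CORRESPONDING FIELDS OF TABLE INT_ORDERS
         FROM ZORDER
         FOR ALL ENTRIES IN INT_KUNDEN
         WHERE VKORG = '0001'
         AND YCUSHTID = INT_KUNDEN-CUSHTOID.
ENDIF.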

2.16. Maximum Value for a Field - An Optimized Approach
When executing a query, SAP Open SQL performs automatic client handling, where it implicitly inserts the value of the client (MANDT) that the user is currently logged on to into the WHERE clause of the query.

Therefore a query written as:

SELECT * FROM KNA1 WHERE KUNNR = 'XYZ'

is transformed by SAP for Client 800 to:

SELECT * FROM KNA1 WHERE MANDT = '800' AND KUNNR = 'XYZ'
A typical query for the maximum value of a field, SELECT MAX( F1 ) FROM T1, would cause the Oracle optimizer to look for an index starting with MANDT, without checking whether that index contains the field F1. This could result in Oracle using an inappropriate index, thereby doing a full table scan or an index range scan followed by a table access.

The following two approaches (as applicable) may be used to solve the above problem and force Oracle to use the appropriate index:

The FIRST approach uses SAP Open SQL and should be used on relatively small tables with less than 100,000 rows. In this approach, the query forces the usage of the appropriate index containing MANDT and the Field (F1) on which the MAX function is applied because of the enhanced WHERE clause.
Example:

SELECT MAX(F1)
FROM T1
WHERE F1 > least_value AND F1 <= highest_value

The SECOND approach requires the use of Native SQL and should be used on large tables greater than 100,000 rows. In this approach, the data is selected directly from the index rather than the table by retrieving the record from the bottom of the index. This does not require a MAX function call or an 'actual' sort on the retrieved records.

The example below gets the maximum value for the field FCBSPDDT from table YTPNUBD, whose Primary Key is MANDT+FCBSPDDT+FCVRNO. It first checks whether the index exists or not. If it does, it retrieves the maximum value from the index; otherwise it uses the unoptimized approach. The primary drawback with this approach is that the index name has to be hard-coded in the query as a hint to Oracle. If the index name is changed, the query has to be changed to maintain optimal performance; otherwise performance may deteriorate.
Example:

SELECT null FROM DBA_INDEXES …
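
A minimal sketch of this approach, assuming an Oracle database, Native SQL (EXEC SQL) and a hypothetical index name 'YTPNUBD~0' for the primary key index described above:

DATA: W_EXISTS(1) TYPE C,
      W_MAXDATE  LIKE YTPNUBD-FCBSPDDT.

* Does the expected index still exist? (the index name is an assumption)
EXEC SQL.
  SELECT 'X' INTO :W_EXISTS FROM DBA_INDEXES
         WHERE INDEX_NAME = 'YTPNUBD~0' AND ROWNUM = 1
ENDEXEC.

IF SY-SUBRC = 0.
* Read the highest key value directly from the index (Oracle hint, descending read)
  EXEC SQL.
    SELECT /*+ INDEX_DESC(T "YTPNUBD~0") */ FCBSPDDT
           INTO :W_MAXDATE
           FROM YTPNUBD T
           WHERE MANDT = :SY-MANDT
           AND ROWNUM = 1
  ENDEXEC.
ELSE.
* Fall back to the unoptimized approach
  SELECT MAX( FCBSPDDT ) FROM YTPNUBD INTO W_MAXDATE.
ENDIF.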
2.17. Safe SQL Check
The following checklist should be followed to achieve effective performance; an action is given against each check. It can also be used as a check list for program performance reviews.

(Each item below lists the Check and the recommended Action; the Done and Comments columns of the original checklist are left blank for the reviewer to fill in.)

Check: Is the program using SELECT * statements?
Action: Convert them to SELECT column1 column2 ….

Check: Are there SELECTs without WHERE conditions against large tables or tables which grow constantly (BSEG, MKPF, VBAK)?
Action: Reconsider your program design!

Check: Are CHECK statements for table fields embedded in a SELECT ... ENDSELECT loop?
Action: Incorporate the CHECK statements into the WHERE clause of the SELECT statement and control the access path with SDBE after the checks are embedded.

Check: Are there SELECT statements for which the execution path is an INDEX RANGE SCAN of an index which has MANDT as the first field, while the other index fields are not (or only with gaps) specified in the WHERE clause? (Use transaction SDBE to check.)
Action: If you have to read less than 20% of the table's records or the table is small, try to apply WHERE criteria that are better supported by the index, or discuss the creation of an index with DDIC support. If you have to read more than 20% of the table, consider forcing a full table scan (see below).

Check: Does the program use non-obligatory SELECT-OPTIONS or PARAMETERS without default values in the WHERE clause (SELECT … WHERE field IN s_field)?
Action: Make sure the select-option fields that are used in the WHERE clause and support index access always have a value. Define them as OBLIGATORY, set a DEFAULT value and check what users enter (event AT SELECTION-SCREEN). If they don't enter any criteria, select only UP TO NN ROWS; NN can also be a parameter with a default value.

Check: Is there a SELECT that forces Oracle to touch or look at more than 20% of the rows?
Action: Force a full table scan using CLIENT SPECIFIED, dropping the WHERE clause for MANDT (but not the other WHERE criteria), and read into an internal table.

Check: Is there a SELECT with CLIENT SPECIFIED?
Action: If this is used to force a full table scan (reading more than 20% of the table rows), make sure there is no MANDT in the WHERE clause. Other WHERE criteria should be provided to avoid network traffic.

Check: Are there duplicate SQL statements to a table using the same WHERE criteria?
Action: Read the data into an internal table and use binary search methods to retrieve it. Beware of internal table sizes (you might have to read with PACKAGE SIZE if the table is too large) and try to limit the width if possible.

Check: Do SELECTs on non-key fields use an appropriate DB index, or is the table buffered?
Action: Check with DDIC support whether an index for the table or buffering would make sense.

Check: Is the program using nested SELECTs to retrieve data? (SELECT … FROM MASTER. SELECT … FROM DETAIL. ENDSELECT. ENDSELECT.)
Action: Convert nested SELECTs to SELECT xxx FOR ALL ENTRIES IN itab, or into SELECT … WHERE field IN range_table if the driving range table only has up to 200 entries. Beware of internal table sizes (you might have to read with PACKAGE SIZE if the table is too large) and try to limit the width if possible.

Check: Does the program read data into an internal table itab, loop at itab and execute SELECTs to retrieve detail information?
Action: Convert nested SELECTs to SELECT xxx FOR ALL ENTRIES IN itab. Make sure the WHERE clause is supported by an index. (See also 'Efficient use of IN clause' on the Enterprise Database Services website.)

Check: Is the program using FOR ALL ENTRIES IN itab WHERE …?
Action: Make sure it doesn't run against an empty internal table itab - it would return ALL records of the table!

Check: Is the program using SELECT ... APPEND ITAB ... ENDSELECT techniques to fill internal tables?
Action: Change the processing to read the data immediately into an internal table (SELECT VBELN AUART ... INTO TABLE IVBAK ...). Beware of internal table sizes (you might have to read with PACKAGE SIZE if the table is too large) and try to limit the width if possible.

Check: Are there SELECT … INTO CORRESPONDING FIELDS OF TABLE itab statements?
Action: If the structure of the fields you select and the internal table is the same, skip the CORRESPONDING FIELDS clause.

Check: Are there SELECT … INTO TABLE statements which read HUGE tables into memory?
Action: To avoid memory allocation problems, read in packages (SELECT … PACKAGE SIZE) and use FREE itab to release allocated memory as soon as possible.

Check: Is the program using SELECT ... ORDER BY statements?
Action: Data should be read into an internal table first and then sorted, unless there is an appropriate index on the ORDER BY fields.

Check: Are there SELECT … ENDSELECT statements to verify whether there is at least one record which matches a criterion?
Action: Use SELECT ... FROM ... UP TO 1 ROWS to stop the query after the first match is found. Support the query with a proper WHERE clause so that an index can be used.

Check: Are there SELECT MAX( field ) … statements without a WHERE clause?
Action: If there is an index available for 'field', add WHERE criteria for 'field' ("field BETWEEN 0 AND 999999") to support index access.

Check: Are there statements like LOOP AT itab. MOVE itab TO dbtab. UPDATE (INSERT) dbtab. ENDLOOP?
Action: Replace this by UPDATE (INSERT) dbtab FROM itab, or use UPDATE dbtab SET … WHERE ….

Check: Are there INSERTs/UPDATEs inside a SELECT … ENDSELECT loop which accesses a huge table?
Action: Open a cursor WITH HOLD and use function module DB_COMMIT to save database changes after a certain number of records, to avoid SNAPSHOT TOO OLD errors.

Check: Is there a DELETE FROM dbtab WHERE field IN range_tab statement?
Action: Make sure range_tab is never empty - otherwise all the rows of dbtab will be deleted!!


3. Miscellaneous
Moving data from one table work area/structure to another one with an identical structure.

Use:
MOVE structure1 TO structure2.

Rather than:

MOVE-CORRESPONDING structure1 TO structure2.
3.1. Determining the length of a string.
Use:
fieldlength = STRLEN( field ).

Rather than:

IF field CP '#'.
ENDIF.

fieldlength = SY-FDPOS.
3.2. ASSIGN statement
If you are assigning a field to a field symbol dynamically (the name of the field to be assigned is held in another field, here KNA1-NAME1), and the field belongs to a Dictionary-defined table or data structure, use the ASSIGN TABLE FIELD syntax of the ASSIGN statement.

For example:

ASSIGN TABLE FIELD (KNA1-NAME1) TO <FS>.

is more efficient than:

ASSIGN (KNA1-NAME1) TO <FS>.

This is because the search for the field (in this case KNA1-NAME1) is carried out only in the Data Dictionary and not in the symbol table. The field must then be a component field of a database table declared with the TABLES statement. This improves the performance of this statement considerably. In contrast to the second method above, the performance does not depend on the number of fields used within the program.
3.3. Testing for Initial Value
The use of the IF statement for checking whether a field is empty (i.e. equal to the initial value for its data type) can be made more efficient by comparing the field with another of the same data type.

For example:

IF MARA-IDNRA = SPACE.
....
ENDIF.

Is more efficient than:

IF MARA-IDNRA IS INITIAL.
....
ENDIF.

But only for the first time the field is tested. This is because for the IS INITIAL test, SAP must determine what data type the field being tested is and then determine what value it is checking for (e.g. space(s) for character fields, zero for numeric fields, etc.). After a field has been tested once, SAP remembers its data type and subsequent ‘IS INITIAL’ tests are equivalent in efficiency to testing against a field (or constant) of the same data type.
3.4. Testing one field for multiple values
When testing an individual field for multiple values, you can use:

IF field = value1.
....
ELSEIF field = value2.
....
ENDIF.
Or:
CASE field.
WHEN value1.
....
WHEN value2.
....
WHEN value3.
....
WHEN valuen.
....
ENDCASE.

The first method is more efficient when checking a field for up to about five values. But the improved readability of the program code associated with the CASE statement dictates that it should be used whenever a field is tested for three or more values.
3.5. Optimizing IF and CASE structures
To optimize IF and CASE structures, always test values in order of the likelihood of each value occurring.

For example, fieldx can have values ‘A’, ‘B’, or ‘C’. A value of ‘B’ is the most likely value to occur, followed by ‘C’, then ‘A’; to optimize a CASE statement for fieldx, code the CASE statement as follows:

CASE fieldx.
WHEN 'B'. " Most likely value
....
WHEN 'C'. " Next most likely value
....
WHEN 'A'. " Least likely value
....
ENDCASE.

Here, if fieldx has a value of ‘B’, only one test is performed, if it has a value of ‘C’, two tests must be performed, and so on.

Coding in this manner reduces the average number of tests performed by the program.
3.6. Subroutine / Function Performance
Because of the added overhead of calling subroutines, functions, etc., you should avoid the following style of coding:

Use:
IF field NE 0.
PERFORM SUB1.
ENDIF.

FORM SUB1.
....
ENDFORM.

Rather than:

PERFORM SUB1.

FORM SUB1.
IF field NE 0.
....
ENDIF.
ENDFORM.
3.7. Performing Calculations
When performing calculations in ABAP/4, the amount of CPU time used depends on the data type.

In very simple terms, Integer (type I) arithmetic is the fastest, Floating Point (type F) requires more time, and Packed (type P) is the most expensive.

Normally, packed number arithmetic is used to evaluate arithmetic expressions. If, however, the expression contains a floating point function, or there is at least one type F operand, or the result field is type F, floating point arithmetic is used instead for the entire expression. On the other hand, if only type I fields or date and time fields occur, the calculation involves integer operations.

Since floating point arithmetic is fast on SAP hardware platforms, you should use it when you need a greater value range and you are able to tolerate rounding errors. Rounding errors may occur when converting the external (decimal) format to the corresponding internal format (base 2 or 16) or vice versa.

A note about Packed number arithmetic:

All Packed fields are treated as whole numbers. Calculations involving decimal places require additional programming to include multiplication or division by 10, 100, 1000, etc. The DECIMALS specification in the DATA declaration is effective only for output with the WRITE statement. If, however, fixed point arithmetic (a program attribute) is active, the DECIMALS specification is also taken into account in calculations. In this case, intermediate results are calculated with maximum accuracy (31 decimal places). This applies particularly to division. For this reason, you should always set the program attribute "Fixed point arithmetic".
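
A small sketch of the effect of the attribute, using an assumed calculation:

DATA: RESULT TYPE P DECIMALS 2.

RESULT = 2 / 3 * 3.
WRITE: / RESULT.
* With the program attribute "Fixed point arithmetic" set, the intermediate
* result of 2 / 3 is kept with maximum accuracy and RESULT is 2.00.
* Without the attribute, 2 / 3 is rounded to the whole number 1 first,
* so the final value (and the output) differs.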
3.8. Checking Return Codes
A program should test the system return code field (SY-SUBRC) after any statements that could potentially change its value unless the outcome of the statement is not important for subsequent processing.

The return code should always be checked after any database table read/update statements.

Be aware that many ABAP/4 statements will set the value of the system return code. It is advisable, therefore, to test the return code immediately after the statement whose outcome is important, rather than further along in the code.

Refer to Appendix A for a list of ABAP/4 statements/keywords that set the system return code and possible values as a result of these.

For SELECT ... ENDSELECT processing loops, the return code should be checked after the ENDSELECT statement to check for the success of the SELECT statement.

For LOOP AT ... ENDLOOP processing loops, the return code should be checked after the ENDLOOP statement to check for the success of the LOOP statement.
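
For example, a sketch using the tables from the earlier examples (the order type 'TA' is an assumed value):

SELECT * FROM VBAK WHERE KUNNR = W_KUNNR.
* ... process each order ...
ENDSELECT.
IF SY-SUBRC <> 0.
* no orders were found for this customer
ENDIF.

LOOP AT T_VBAK WHERE AUART = 'TA'.
* ... process each entry ...
ENDLOOP.
IF SY-SUBRC <> 0.
* no matching entries in the internal table
ENDIF.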
3.9. Sequential File Processing
An ABAP/4 program that reads from or writes to a sequential dataset should always OPEN the dataset before the read/write operation, and after file processing it should CLOSE the file. Although these statements are not mandatory, it is good practice to include them. Also, without an OPEN DATASET statement, a file will always be opened in BINARY mode. This may not be the mode required!
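
A minimal sketch (the file name is a made-up example; note that on newer Unicode systems OPEN DATASET additionally requires an ENCODING addition):

DATA: W_FILE(60) VALUE '/tmp/example.txt',
      W_LINE(200).

OPEN DATASET W_FILE FOR OUTPUT IN TEXT MODE.
IF SY-SUBRC = 0.
  TRANSFER W_LINE TO W_FILE.
  CLOSE DATASET W_FILE.
ENDIF.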
3.10. Hard-coding Text (messages, report output, etc.)
DO NOT! Wherever possible, all text to be passed to messages, or to be written to a report should be created as a Numbered Text (in Rel. 3.0 these are called Text Symbols). This allows text to be more easily maintained and supports the maintenance of these texts in other languages.

To increase ABAP/4 code readability, it may be useful to identify what text is being output / written by including a comment in the program code, or by specifying the Numbered Text as follows:

WRITE: / 'This is some text'(xyz).

where xyz identifies a Numbered Text.
3.11. Function Modules
· Prior to creating complex routines, check the Function Module Library for a Function Module that may perform the required function.
· Where several programs are doing similar processing (e.g. date calculations, updating the same tables, etc.), consider creating a function module to perform the processing. Future ABAP/4 programs can then execute this code using the CALL FUNCTION statement.
· “Don’t re-invent the wheel” if possible!
3.12. Guidelines for Using Transactions SE(NN)
In an effort to reduce unnecessary database CPU utilization in production systems, the database design group has created the following guidelines for using ad-hoc type SAP transactions like SE16, SA38, and SE38. The intent is not to eliminate the usage of these transactions but to reduce the number of inefficient database queries that can be generated by these processes. For details on other SE(NN) transactions visit Additional SE(NN) Guidelines.

1. Prepare
a. Know your table and the data within the table.

What indexes exist for the table? Check with your supervisor or functional analyst.

What indexed attributes identify small result sets? In contrast, are there certain fields for which the data in the entire table is roughly the same (e.g. a country code field for a table could very well be USA for every row in the table...this attribute would "not" help identify small result sets, instead it would bring back everything from the table)?

How large is the table, i.e. number of rows, width of rows, and so on?

b. Consider what's running.

Is this a critical processing period, i.e. month-end close? If so, wait until the system is less busy.
If possible, run your SE(NN) transactions in the evenings or weekends when the system is less busy.

2. Use the transactions responsibly
a. SE16

Design an efficient query:

Always enter as many fields of data as possible. Especially important are date related fields. The more data you supply, the less work the database will have to do "behind the scenes", and the faster your results will be returned.

Never run an SE16 type query against a database view. If you're not sure if something's a "view", ask your supervisor.

As a general rule, if a field allows for single or multiple entries, try to stick to using the single entry. Multiple value fields can eliminate the effectiveness of an index.

Only retrieve data that you absolutely need (e.g. if you only need data from one Organization, supply that field in the SE16 entry screen, versus using an '*' to pull back all the possible entries for that field)

Most importantly, if you have any doubts on how the query will perform, run the query in a large test environment to test performance before invoking it on production data.

b. SE38 / SA38

Run a program only if you are familiar with it.

Do not execute a program that may run for more than a few minutes

3. Background versus dialog mode

Consider running programs in background mode. Basically there are two ways to run programs in SAP: interactively, in what's known as "dialog" mode, or via "background" mode. When programs are run in dialog mode, the program will run and tie up a user's SAP GUI session. This will continue until either the program finishes or is cancelled. From the user's perspective, the SAP session will be busy and unusable until it finishes.

Programs run in dialog mode are intended to be quick ones. Period. Therefore, certain restrictions are in place, including an automatic "time out" for programs running more than 30 minutes. If that time limit is exceeded, the user's session will be terminated, and no results will be returned. Starting additional programs one after another in this case will not help. They will time out as well and place an unnecessary load on the system.

Longer running programs need to be run via background mode. There are no automatic "time outs" for programs run this way. Not all users have access to run jobs in this manner. Please work with your supervisor to request this access if it is deemed applicable.

4. Helpful Hints

When running these transactions, be aware that it is possible for you to cancel, close, Ctrl-Alt-Delete, or even power off your computer, and the program can still be running, behind the scenes, on the database! If your "end" of the SAP GUI has been closed in this manner, then SAP will never be able to return the results back to you. Nonetheless that transaction will still be consuming resources on the database, affecting the performance of everything else running in the system.

Be aware that an SE16 transaction can be killed at the SAP level and still be running in the database.

Run these transactions during off-peak hours.

Run the transaction that will return only the information you want.

5. If a query ever gets away from you, make sure it's cleared from the system! If the transaction is still running after 10 minutes, you've likely issued an inefficient query and are adversely affecting production applications. Use SQL Trace to see where the problem is and redesign the SQL.

Don't try the query again until you understand what happened and how to prevent it in the future.
3.13. SAP/SQL Performance Tips
1. KNOW THY DATA! Be ready to answer the following questions about your data:
· What is the cardinality?
· What does the user do with the data?
· Which columns have good selectivity?
· Which columns are accessed very frequently?
· Do you always use functions when you access the column?
· Which tables are always used together?

2. GIVE THE DATABASE A BREAK WHENEVER POSSIBLE. The fastest query is the one that was never issued.

3. AVOID NETWORK TRAFFIC BY MINIMIZING THE DATA TRANSFERRED.
Transfer only the data that you really need.
Use "SELECT SINGLE" or "UP TO n ROWS".

4. UNDERSTAND HOW THE DATABASE SELECTS AN ACCESS PATH AND EXECUTES A SQL STATEMENT.

5. MINIMIZE THE NUMBER OF DATABASE ACCESSES.
Bundle your requests to the Database.
Read data into internal tables and sort the data on the Application Server.

6. CREATE SEPARATE MODULES FOR SQL STATEMENTS OR USE VIEWS FOR FREQUENTLY USED SELECT STATEMENTS.
Promotes execution of SQL statements that look exactly the same

7. AVOID ANY UNNECESSARY FULL TABLE SCANS.
Verify that the correct index is being used and less than 20% of the rows are retrieved.

8. INCLUDE ALL INDEX FIELDS IN THE "WHERE" CLAUSE.
Do not specify Index fields using: NOT, LIKE '%pattern', IS NULL, IS NOT NULL, IN, NOT IN, !=

9. PERFORM A FULL TABLE SCAN IF THE TABLE IS SMALL OR MORE THAN 20% OF THE ROWS ARE RETRIEVED.
Delete the index, change the "WHERE" clause, or add "CLIENT SPECIFIED" and omit the "WHERE" clause to suppress the index.

10. DO NOT DEFINE MORE THAN FIVE INDEXES PER TABLE.

11. FOR COMPARISONS IN THE "WHERE" CLAUSE, USE "=" COMBINED WITH ANDs.

12. TRANSFORM "IN" STATEMENTS INTO AND/OR STATEMENTS.

13. USE "SELECT … FROM … INTO TABLE INT_TAB" INSTEAD OF "SELECT … ENDSELECT".

14. CHECK WITH THE DB DESIGN GROUP IF TABLE BUFFERING CAN BE USED FOR CODE/DECODE TABLES.

15. USING AN AGGREGATE TABLE IS BETTER THAN USING A "GROUP BY" STATEMENT.

16. USE ARRAY OR SET OPERATIONS TO CHANGE TABLES INSTEAD OF "MODIFY".

17. AVOID ASYNCHRONOUS UPDATES.

18. AVOID DYNAMIC SQL.

19. USE CURSORS TO COMMIT TABLE CHANGES DURING LONG RUNNING "SELECTS" (FUNCTION - DB_COMMIT).

20. ALWAYS CREATE INDEXES FOR MATCHCODE IDs AND APPLY SELECT CRITERIA OR SET/GET PARAMETERS TO LIMIT THE NUMBER OF ROWS RETURNED.

21. AVOID LOGICAL DATABASES IF AT ALL POSSIBLE BECAUSE THEY ARE NOT USUALLY TUNED; IF YOU HAVE TO USE THEM, WRITE PROPER STATEMENTS TO ACCESS THEM.

22. TO LOCK SEVERAL TABLE ROWS DO NOT CALL THE "ENQUEUE" FUNCTION FOR EACH ROW, BUT USE GENERIC LOCKS INSTEAD.