SAP On SQL Server
SCN Gamification
Posted by Eduardo Rezende Apr 29, 2013
What do you think about the new SCN Gamification? If this is a completely strange topic for you, check Chip's blog: We're LIVE with #SCN Gamification! #SCNGameOn

Personally, I think this new platform will help the community. Do you have suggestions about any particular badge (or mission) we should have here at SCN? I already got some badges, like:
- I Was Here
- First Steps
- I Shared Some Knowledge!
Which badges did you get? Any particular badge you are looking for?
Tags: scngameon
Quite often someone asks me how an external SQL Server database can be accessed by an SAP system, e.g. to:
- Access data in an external SQL Server database with the SAP system
- Report against data in an external SQL Server database with Business Intelligence / Business Warehouse
- Use DBACockpit to monitor an external SQL Server instance
Depending on:
- which operating system your SAP application servers run on
- which purpose you want to use the connection for
- which type of SAP application servers (ABAP, Java, dual-stack) are available in the SAP system
there are different connection types, technical requirements and restrictions. This blog post clarifies the possibilities and restrictions and covers frequently asked questions:
1. Options and technical requirements to access an external SQL Server database
2. How to set up a connection with UDConnect
3. How to set up a connection with DBCon / Multiconnect
4. How to monitor an external SQL Server database using DBACockpit
5. Troubleshooting
Regardless of the way you choose, you can only connect to remote databases which are reachable via network from your SAP application server.

DBCON / Multiconnect
DBCON / Multiconnect uses the Microsoft SQL Server Native Client software (SNAC) to establish a connection to the remote SQL Server instance. The Microsoft SQL Server client software for Windows consists of several *.dll files. For a long time it was available for Windows platforms only. Recently, Microsoft ported its SQL Server ODBC driver to Linux, so heterogeneous Linux/Windows scenarios are now possible. DBCON utilizes the SAP ABAP stack to access the external databases, so your system requires at least one ABAP-stack-based SAP application server running on Windows or Linux x86_64.

UDConnect (Universal Data Connect)
UDConnect uses a JDBC (Java Database Connectivity) driver to establish a connection to the remote SQL Server instance. The JDBC driver consists of one or more *.jar files and can be used on Windows, Unix and Linux operating systems. As UDConnect utilizes the J2EE engine of the SAP application server to access the external databases, you need at least one Java-stack-based SAP application server in your SAP system in order to use UDConnect.

Connectivity Matrix

                 Java Stack    ABAP Stack    Dual Stack
  Windows        UDConnect     DBCon         UDConnect, DBCon
  Linux x86_64   UDConnect     DBCon         UDConnect, DBCon
  Unix           UDConnect     none          UDConnect
Remarks:
- If your system consists solely of ABAP-stack-based servers running on Unix platforms you can use neither UDConnect nor DBCON. Why? Because UDConnect requires at least one Java-stack-based SAP application server (regardless of the operating system) and DBCON requires at least one Windows- or Linux x86_64-based SAP application server.
- DBCon on a Linux x86_64-based application server can only connect to SQL Server versions 2005 and higher. Earlier releases are not supported by the Microsoft driver. Furthermore, the driver is only supported for Red Hat Enterprise Linux 5.x and higher and for SUSE SLES 11 SP2 and higher.
SAP DBSL for Windows
DBCON utilizes the ABAP stack to connect to an external database. The ABAP stack itself requires the Database Shared Library (DBSL) to communicate with a database. For each relational database management system (RDBMS) supported by the ABAP stack there is a separate DBSL provided by SAP. To install the DBSL, download the database-dependent kernel part (LIB_DBSL.SAR) from the SAP Service Marketplace and extract it into the kernel directory (DIR_EXECUTABLE) of the application server.
SAP DBSL for Linux x86_64
Please see SAP note 1644499 if you need to download and install the SAP DBSL for Linux x86_64-based servers. The note describes how to request the DBSL and also explains in detail which steps are required to properly set it up.
DBCON entry
The DBCON entry informs the ABAP stack where to find the external SQL Server database and how to authenticate (see the sketch after this section). Please see SAP note 178949 to learn how to create a DBCON entry for an external SQL Server database.

Microsoft SQL Server Client for Windows
The SQL Server Native Client is used to establish the connection to the external SQL Server instance. To install it you need to run the sqlncli.msi installation package which is available from the SQL Server installation DVD / CD, or from the Microsoft software download website.

Microsoft ODBC Driver for Linux x86_64
SAP note 1644499 explains in detail where to download the Linux x86_64-based ODBC driver and how to install it.
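As an illustration only, a DBCON entry for an external SQL Server database typically contains fields along these lines (field names as used in SAP note 178949; host, database and user names are placeholders, and the exact CON_ENV syntax for your release should always be taken from the note):

  CON_NAME:  EXT1          -- freely chosen name of the connection
  DBMS:      MSS           -- identifies SQL Server as the database platform
  USER_NAME: sapmon        -- SQL Server login used for the connection
  PASSWORD:  <password>
  CON_ENV:   MSSQL_SERVER=remotehost MSSQL_DBNAME=EXT1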
5. Troubleshooting
No shared library found for the database with ID <DBCON_entry_name>
or
Unable to find library '<kernel_directory>/dbmssslib.sl'. -> DLENOACCESS (0,Error 0)
or
ERROR => DlLoadLib()==DLENOACCESS - dlopen ("/usr/sap/<SID>/DVEBMGS00/exe/dbmssslib.so") FAILED
or
could not load library for database connection <DBCON_entry_name>
or
cannot open shared object

This error indicates that the ABAP stack could not find the SAP DBSL for SQL Server (dbmssslib.dll) in the kernel directory. If you encounter this error on a Unix-based server the root cause is clear: the DBSL does not exist for platforms other than Windows or Linux x86_64. In this case use a Windows-based or a Linux x86_64-based SAP application server to establish the connection. If your system does not contain a Windows-based or a Linux x86_64-based application server you need to set up a small one as a workaround. If you encounter this error on a Windows or Linux x86_64-based application server, make sure that the DBSL is properly installed in the kernel directory as explained in point 3.

B Wed Jan <timestamp>
B create_con (con_name=<dbcon_name>)
B Loading DB library '<kernel_directory>\dbmssslib.dll' ...
M *** ERROR => DlLoadLib: LoadLibrary(<kernel_directory>\dbmssslib.dll) Error 14001
M Error 14001 = "This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem."
B *** ERROR => Couldn't load library '<kernel_directory>\dbmssslib.dll'
B ***LOG BYG=> could not load library for database connection <dbcon_name>

The DBSL could be found in the kernel directory but there was a problem while loading it. This can have various reasons. To ensure that the file itself is not corrupt, please download and install the file from scratch as explained in point 3. If the error remains afterwards, please check the OS log for further errors at the time of the error.
One possible cause: the Microsoft runtime DLLs which are required by the DBSL are missing on your server. Please install them as explained in SAP Note 684106.

Could not find stored procedure 'SAPSolMan<version>.sap_tf_version'
DBACockpit uses stored procedures to collect monitoring information from a database. These stored procedures need to exist in the database that is being monitored. If you are using the connection for a purpose other than remote monitoring with DBACockpit you can ignore this error. If you want to remotely monitor the SQL Server database, please make sure that you've configured the connection exactly as described in the configuration guide referenced in point 4. Then you need to create the missing stored procedures in the remote database. To do so, open transaction DBACockpit in the monitoring system, use the "System" dropdown field to select the remote SQL Server system which you want to monitor, then go to Configuration -> SQL Script Execution. If the monitoring schema is missing in the remote database you will be offered a button called "create/repair schema". After using it to create the schema you will be offered a button called "Execute script(s)". Click on it to create all required monitoring stored procedures in the remote database.

You want to update the JDBC driver used by your UDConnect connection
Follow the instructions in SAP Note 1009497.
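To verify on the remote database side whether the monitoring stored procedures discussed above were created, a query along these lines can be used (a sketch; the schema name pattern follows the 'SAPSolMan<version>' naming from the error message):

  -- List SAP monitoring stored procedures and their schemas in the remote database
  SELECT s.name AS schema_name, p.name AS procedure_name
  FROM sys.procedures AS p
  JOIN sys.schemas AS s ON s.schema_id = p.schema_id
  WHERE p.name LIKE 'sap[_]%'
  ORDER BY s.name, p.name;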
Hi again,

In my last blog post I already discussed a major topic for SQL Server databases, the common misconceptions. Now I want to elaborate on another topic which I come across very frequently... PERFORMANCE

Performance tuning is a very complex domain - good and deep knowledge of how SQL Server works is required, and it's simply impossible to quickly cover all the facts and details you need to thoroughly look into every single corner of your database that could be tuned. Why am I still writing a blog post about it then? Because I very often see SQL Server-based SAP systems where little effort could improve performance considerably, and many of the tasks which I'll talk about can even be carried out without a downtime. For this reason I always find it a pity to look at a system and see that these basic tasks were not carried out. I have the impression that some SAP recommendations for SQL Server databases which were communicated via SAP Notes within the last couple of years are still not so well-known for some reason, so I want to seize the opportunity and broadcast them. These are general recommendations: they are not meant for special cases only, but should be followed in any case. As for my last blog post, I have again written a KBA which contains everything I want to share, and I post the initial version of it here for those of you who don't have access to SAP Notes and KBAs.

SAP KBA 1744217 - Basic requirements to improve the performance of a SQL Server Database

Points 2, 3, 4, 5, 8 and 9 don't even require a downtime, so you can go ahead and apply them right away. Point 3 will cause some load for large objects and should therefore be carried out when the overall system load is low and you're able to monitor it. Small tables can be compressed quite quickly and won't cause considerable load. It's a good idea to simply test it on a handful of tables with different sizes so you can see how long it takes in your system. You'll be astonished how much space (and thereby indirectly I/O accesses) page compression will save you.
(2) Statistics
If you follow point 7, SQL Server itself will take care of automatically updating statistics. Please do not schedule any additional statistics updates unless SAP explicitly recommends you to. Besides the automatic statistics update, please implement SAP Note 1558087.
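Whether the automatic statistics handling is actually switched on for a database can be verified with a quick query (a sketch; '<SID>' is a placeholder for your database name):

  -- Check the auto-statistics settings of a database
  SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
  FROM sys.databases
  WHERE name = '<SID>';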
1. Go to transaction SE38 or SA38
2. Start report MSSCOMPRESS
3. Set the Data Compression Type and Index Compression Type filter options to "Not compressed"
[Screenshot: MSSCOMPRESS selection screen (msscompress.png)]
4. Wait for the table list to be refreshed
5. If uncompressed objects are found, follow SAP Note 1488135 to page-compress them. Note that you can choose between:
- Always ONLINE
- ONLINE, retry OFFLINE
- Always OFFLINE
Please be aware that compressing an object implicitly requires locking the object being compressed at certain times. If you use the online option, SQL Server will use as few locks as possible. If you use the offline option, the object will be locked and will not be available for access until the compression has finished. For large objects compression can take a while; for this reason make sure to use the first option if you want to avoid this. For tables which contain columns with data type image, text, ntext, varchar(max), nvarchar(max), varbinary(max) or xml, an online compression is not possible with SQL Server releases lower than SQL Server 2012. Please consider this when planning the compression of your database.
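Under the hood the report issues standard SQL Server DDL. For a single table, roughly equivalent statements look like this (a sketch with a placeholder table name; ONLINE = ON corresponds to the "Always ONLINE" option and requires Enterprise Edition):

  -- Page-compress the table (heap or clustered index) and all of its indexes
  ALTER TABLE [dbo].[MYTABLE] REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);
  ALTER INDEX ALL ON [dbo].[MYTABLE] REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);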
(5) Datafiles
To ensure that the data can be distributed over all existing data files, it is important that all data files provide free space at all times. Please follow SAP Note 1238993 to ensure that your data files are configured correctly. It is also recommended to have roughly 0.5 - 1 data files per CPU core (e.g. if your SQL Server can use 4 CPU cores, 2 - 4 data files make sense). If you are using a BW system it makes sense to have the same number of data files for the tempdb.
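Whether all data files still provide free space can be checked directly (a sketch, run in the context of the database in question):

  -- Size and free space per data file of the current database
  SELECT name,
         size / 128 AS size_mb,
         size / 128 - FILEPROPERTY(name, 'SpaceUsed') / 128 AS free_mb
  FROM sys.database_files
  WHERE type_desc = 'ROWS';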
(6) Lock pages in memory

As of SQL Server 2005 it is possible to prevent the operating system from paging out memory allocated by SQL Server to the page file. As a major part of the main memory allocated by SQL Server is the data cache, it is important that it is not paged out. Otherwise the data would in the end be read from disk (the page file) instead of from main memory, which decreases performance. Please follow SAP Note 1134345 to make sure that you are using the "lock pages in memory" feature.
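Whether SQL Server is actually running with locked pages can be verified at runtime (a sketch; the DMV is available as of SQL Server 2008):

  -- A non-zero locked_page_allocations_kb indicates "lock pages in memory" is in effect
  SELECT physical_memory_in_use_kb, locked_page_allocations_kb
  FROM sys.dm_os_process_memory;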
(7) Parameters
Please make sure that the database parameters are set as recommended in the SAP Notes:
- 327494 - SQL Server 2000
- 879941 - SQL Server 2005
- 1237682 - SQL Server 2008
- 1702408 - SQL Server 2012
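Instance-level parameters of this kind are typically set with sp_configure. Purely as an illustration (a sketch; always take the actual parameter names and values from the note matching your release, not from this example):

  -- Example: set one instance parameter the way the SAP notes describe it
  EXEC sp_configure 'show advanced options', 1;
  RECONFIGURE;
  EXEC sp_configure 'max degree of parallelism', 1;  -- value per the SAP note for your release
  RECONFIGURE;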
(8) sp_autostats
We recommend switching on sp_autostats for all objects in the database in order to leave the task of updating statistics to SQL Server. For some tables, however, we've experienced better performance if the automatic statistics update is switched off. To correctly configure these for your database release, please follow SAP KBA 1649078.
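sp_autostats can be checked and set per object; a sketch with a placeholder table name:

  -- Show the current automatic statistics update setting for one table ...
  EXEC sp_autostats 'dbo.MYTABLE';
  -- ... and switch the automatic statistics update on for it
  EXEC sp_autostats 'dbo.MYTABLE', 'ON';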
... wondered why SQL Server behaves so weird?

Have you ever asked yourself questions like:
- why is my transaction log running full if I'm already using recovery model simple?
- how often should I update the statistics of the database objects?
- how often should I reorganize or rebuild tables and indexes?
- why is the timestamp of the optimizer statistics for some objects not new if my Update_Tabstats job runs frequently?
- why is so much data missing in some tables after I used repair_allow_data_loss to repair database inconsistencies?
- why does DBCC CHECKDB or DBCC CHECKTABLE still find inconsistencies when I've already used repair_allow_data_loss?
- why are my datafiles not growing the way I expect them to even though I've configured the files to autogrow?
- why is my database not occupying less space after I've archived so much data?
- why is my table not occupying less space after I've deleted so many rows from it?
- why is the result of my query not ordered anymore even if it always used to be?

Bad news first: my experience says NO, to 99.99999% what you see is NOT a bug. Instead, there's simply a gap between how you think it's supposed to work and how Microsoft designed it to work. ... and now the good news: YES, there IS a comprehensive explanation to your "but why?!?!" questions, and finally you get all the answers at once.
To clarify all these frequent misconceptions I released: SAP KBA 1660220 - Microsoft SQL Server: Common misconceptions. For those of you who don't have access to SAP Notes and KBAs I paste the current content of the note here.
If you come across similar topics, let me know and I'll try my best to cover them as well. Regards, Beate
----------------------------------------------------------------------
1. The SQL Server Agent job Update_Tabstats updates the database statistics which are used by the database optimizer to calculate execution plans and therefore it is critical for performance if the job fails.
The job SAP CCMS_<sid>_<SID>_Update_Tabstats does not touch the optimizer statistics at all and has no influence on execution plans. Instead, it is part of the database monitoring framework implemented and provided by SAP. It is not natively included in a Microsoft SQL Server installation, but is developed and delivered by SAP and was introduced with Basis Release 7.00 SP12. The job executes the stored procedure sap_update_tabstats. It collects meta information about database objects (e.g. key figures like the number of table rows, the reserved size of an object, the row modification counter, and many more) and stores them persistently in the database. As SQL Server does not keep any history for such key figures, SAP collects and stores them with this job in order to make historical information about database objects available. This makes it possible to analyze how certain properties of database objects change over time and serves as a source of information for SAP DB monitoring transactions (e.g. DBACockpit, fastest growing tables, ...). Also see SAP Note 1178916 for more information.
2. When using recovery model simple the transaction log cannot run full.
The transaction log of a database consists of one or more files. You can decide for each file whether you allow SQL Server to autogrow the file if required or not (autogrow on/off). With recovery model simple you ensure that SQL Server will truncate the log at each checkpoint - still, this doesn't mean that the transaction log cannot run full. Imagine you have a very long running transaction and all transaction log space is consumed before the transaction reaches the point in time where it commits. In such a case the transaction log can run full even if you are using recovery model simple. To resolve this you need to take a closer look at the transactions - is there a long running transaction which keeps SQL Server from truncating the log? Is it normal that this transaction takes so much time, or is it caused by a wrong execution plan, bad I/O or any other performance-degrading issue? To understand this in detail you need to make yourself familiar with how transaction log truncation works, meaning: which parts of the transaction log are considered active and which are considered inactive when the truncation is carried out. Parts cannot be truncated as long as they are still active. For a detailed example please see SAP Note 421644.
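When the log does run full despite recovery model simple, a few standard commands show how full the logs are and why SQL Server currently cannot truncate (a sketch; '<SID>' is a placeholder):

  -- Log size and log usage of all databases
  DBCC SQLPERF (LOGSPACE);
  -- Why can the log of a particular database not be truncated right now?
  SELECT name, log_reuse_wait_desc FROM sys.databases;
  -- Is there a long-running open transaction in the database?
  DBCC OPENTRAN ('<SID>');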
3. The result set of database accesses is always ordered by the primary key even if no ORDER BY clause is used explicitly.
The result set is only ordered by the key if SQL Server uses the primary index for the access. Since the database optimizer chooses the access path dynamically based on existing statistics, you cannot assume that SQL Server ALWAYS uses the primary index for certain accesses and therefore cannot rely on an ordered result. If you require the result of a query to be ordered you must use an ORDER BY clause.
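In practice this simply means adding the clause whenever the order matters; a minimal sketch with placeholder names:

  -- Without ORDER BY the row order is an implementation detail of the chosen access path
  SELECT keycol, payload
  FROM dbo.MYTABLE
  ORDER BY keycol;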
4. If database accesses hang for a long time, the problem is caused by a deadlock.
This assertion is incorrect. Genuine deadlocks (in other words, the mutual blocking of several transactions) are quickly recognized by Microsoft SQL Server and eliminated within seconds by canceling one of the blocking transactions with an SQL error 1205. Database accesses that hang for a long time may have a wide variety of causes (blocking locks or suboptimal execution plans, for example), but are not deadlocks. A good starting point for analyzing why certain actions hang is creating snapshots of the current situation with hangman. Refer to SAP Notes 948633, 541256 and 806342. If you really encounter a deadlock (which becomes evident by the occurrence of an SQL error 1205), you can analyze it in more detail by following SAP Notes 111291 and 32129.
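If you do encounter genuine 1205 deadlocks, SQL Server can write the details of each deadlock to its error log via a trace flag (a sketch; trace flag 1222 is available as of SQL Server 2005, trace flag 1204 for older releases):

  -- Write detailed deadlock information to the SQL Server error log for all sessions
  DBCC TRACEON (1222, -1);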
5. Updating Microsoft SQL Server database statistics manually, with a SQL Server Agent job or a SQL Server Maintenance Plan is required on a regular basis.
Updating statistics on a regular basis is important, but with SQL Server there is no need to schedule this task explicitly! As long as the autostats feature is properly enabled, SQL Server automatically detects if statistics need to be updated and carries out this task for you. The decision whether statistics are considered out of date depends on several factors like the number of rows in the table and the number of rows modified since the last statistics update. For details on the algorithm please see:
- Statistics Used by the Query Optimizer in Microsoft SQL Server 2000
- Statistics Used by the Query Optimizer in Microsoft SQL Server 2005
- Statistics Used by the Query Optimizer in Microsoft SQL Server 2008
To ensure that the automatic statistics update is enabled correctly, please refer to the configuration note for your SQL Server release:
- SQL Server 2000: SAP Note 327494
- SQL Server 2005: SAP Note 879941
- SQL Server 2008: SAP Note 1237682
- SQL Server 2012: SAP Note 1702408
Bottom line: don't update optimizer statistics manually for any object (or the whole database) unless SAP explicitly asks you to do so. It will produce I/O load and will not have any benefit. The only exception to this rule are tables which contain date information. To ensure proper statistics for those tables at all times you need to follow SAP Note 1558087 and schedule an update job for such tables.
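If you suspect that the statistics of a particular table are stale, check their last update time before doing anything manual (a sketch with a placeholder table name):

  -- Last update time per statistics object of one table
  SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
  FROM sys.stats
  WHERE object_id = OBJECT_ID('dbo.MYTABLE');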
6. Reorganizing some or all database objects is a required maintenance task and should therefore be carried out on a regular basis.
A bad overall performance or a bad performance of single database operations is often believed to be caused by the fragmentation of tables and indexes. As a solution, a reorganization or rebuild appears to be the cure. This might apply to other relational database management systems, but for SQL Server, in most situations both the assumption that the bad performance is caused by fragmentation and the attempt to solve the problem by reorganizing or rebuilding database objects are false. For this reason, SAP explicitly recommends not to reorganize or rebuild any database objects on a regular basis. You should not even reorganize or rebuild objects as an attempt to solve a performance problem as long as it is not evident that fragmentation is the root cause of the problem (which it hardly ever is). Please see SAP Note 159316 which explains this topic in more detail.
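If you still want evidence before ruling fragmentation out, measure it instead of guessing (a sketch with a placeholder table name; the LIMITED mode keeps the load of the check itself low):

  -- Logical fragmentation and size of all indexes of one table
  SELECT index_id, avg_fragmentation_in_percent, page_count
  FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MYTABLE'), NULL, NULL, 'LIMITED');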
7. DBCC CHECKDB and DBCC CHECKTABLE with option repair_allow_data_loss allow you to repair database inconsistencies and will not cause any data loss.
The repair_allow_data_loss option is not a tool which can perform magic to recover data from pages which are physically damaged or contain logically incorrect information. Instead, it does more or less exactly what its name says: it tries to retrieve as much data as possible and will discard as much data as required to return to a consistent version of the affected object(s). It is important to understand that database inconsistencies are in almost all cases caused by malfunctions on lower layers (typically hardware or driver malfunctions). This means that due to a malfunction on these lower layers one or more database pages are damaged - meaning their content is not fully correct anymore to a certain extent. There are various types of database inconsistencies: e.g. pages might no longer be linked properly, links between pages might be missing completely, pages from the allocation maps (GAM, IAM, SGAM) might contain incorrect data, pages might
be damaged to an extent that they do not even have the physical structure of a SQL Server page anymore. If you are very, very lucky this affects a page which was cached for faster access in your main memory and the inconsistent page hasn't yet been written back to disk. This is what we call a transient inconsistency, but unfortunately an inconsistency is hardly ever a transient one. In most cases the inconsistent pages are in the database files or in the log files. This means the incorrect information is on disk and there is no proper version of the affected page(s) anymore. This should make it clear why you cannot simply "recover" from an inconsistency. An inconsistency is a damaged page - there is no way to make the database guess what the correct content of an inconsistent page would have been and to let the DB simply revert the page content back to the correct version. In most cases you will have more than one inconsistency. In order to judge how bad the situation is you need very exhaustive knowledge of SQL Server to understand which kinds of pages (e.g. index pages, data pages, leaf pages, allocation map pages) are affected and which impact this has. Using repair_allow_data_loss in order to let the database discard everything that cannot be interpreted or read properly anymore is no solution - in most cases it will even make things worse, and still there is no guarantee that this option will even be able to recreate a consistent version of the affected object(s), despite accepting data loss. You might still have inconsistencies left afterwards, as depending on how bad the situation is it might not even be possible anymore to return to a physically consistent state. On the other hand, and much more important: this leads to completely uncontrollable, unpredictable data loss and there is no way to log or trace what is thrown away. You will have data loss and this will cause inconsistencies on SAP application level (usually SEVERE inconsistencies). For these reasons, SAP does not support the usage of repair_allow_data_loss. See SAP KBA 1704851 and SAP Note 142731 for further details.
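The supported first step is always a pure check run without any repair option, so that the full extent of the damage is known before anything else is decided (a sketch; '<SID>' is a placeholder):

  -- Consistency check only: reports all errors, repairs and discards nothing
  DBCC CHECKDB ('<SID>') WITH NO_INFOMSGS, ALL_ERRORMSGS;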
8. After archiving or deleting data from a table the table and its indexes will occupy less space in the database and the database itself will also occupy less space.
There are different key figures which inform you about the space consumption of an object (reserved size, data size, index size, unused size). If you have a large table and you delete a large amount of data from it (e.g. by reorganizing the entries from application level or by archiving), SQL Server will not release the freed space back to the data files. Instead, it will keep the freed space reserved for the object. If you really need to release the space back to the data files and then release it back to the filesystem, please refer to SAP KBA 1721843 for more details. If you are not urged to gain back the space on filesystem level, SAP recommends to simply leave the object as it is. SQL Server will reuse the freed space as soon as new entries are inserted into the table.
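The key figures mentioned above can be displayed per object with sp_spaceused (a sketch with a placeholder table name):

  -- Rows, reserved, data, index_size and unused space of one table
  EXEC sp_spaceused 'dbo.MYTABLE';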
9. If the autogrow option is configured for all datafiles, Microsoft SQL Server will grow all files in a balanced way.
The assumption that MS SQL Server grows files in a balanced way if autogrow is switched on for all files is a common misconception. SQL Server uses a proportional filling algorithm to distribute new data over all existing datafiles. This is described in more detail in SAP Note 1238993 - even though the note explicitly mentions SQL Server release 2008, it works the same way for all releases. Briefly explained: if new data needs to be added to the database, SQL Server distributes the new data over all datafiles which still provide free space. SQL Server will not grow any file as long as there is at least one datafile which still provides free space. Even if autogrow is configured for all datafiles, SQL Server will wait until ALL files are full - only if all datafiles are completely full and SQL Server needs to add new data will it autogrow files. If you are using SQL Server release >= 2008 and have set trace flag 1117, SQL Server will grow all existing datafiles with autogrow=on at an autogrow event. In any other case, SQL Server will grow a single file only! All new data will then go into this single grown file until this file is full again. Then, as soon as new data needs to be added again, SQL Server will repeat the previously explained procedure and will grow a single file only. In order to allow MS SQL Server to use all files at any time, we strongly recommend you to make sure that you always have free space left in all existing datafiles. If some of your datafiles are currently full, extend them if your disk layout allows it. For SQL Server releases >= 2008 Microsoft provides trace flag 1117 which makes SQL Server grow all files instead of a single file only at an autogrow event. This reduces the monitoring effort needed to ensure proper data distribution. Please see SAP Note 1238993 for more details and to learn how to set the trace flag.
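Trace flag 1117 can be switched on at runtime for a quick test; for a permanent setting it belongs in the startup parameters as described in the note (a sketch):

  -- Grow all files of a filegroup together at an autogrow event (SQL Server >= 2008)
  DBCC TRACEON (1117, -1);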
Tags: sqlserver, database, mss, microsoft, db, nw os, db_administration, db_development, db_maintenance
- Merge Command
- Backup Compression (optional)
- Transparent Data Encryption (optional)
- Changed Data Capture (optional)
- Star Join Optimization (BW)
- Grouping Sets (BW)
- Parallelism for partitions (BW)
- Row and Page compression
- Increased speed of partition drop
- 15000 Partitions (SQL Server 2008 SP2 - feature not yet available in SQL Server 2008 R2)
- Unicode compression (SQL Server 2008 R2)
- Decrease in backup size of up to 65%. This depends of course on the content of the data.
- CPU usage will increase during the backup process. The more the data can be compressed, the higher the increase in CPU usage.
- Faster backup speed (~25%) because it requires less disk I/O.
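Backup compression is requested per backup (or set as the instance default); a sketch with placeholder names:

  -- Compressed full database backup
  BACKUP DATABASE [<SID>]
  TO DISK = 'R:\backup\sid_full.bak'
  WITH COMPRESSION, STATS = 10;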
Benefits:
SAP supports partitioning only for specific tables in SAP BW. In BW 7.00 and newer releases, the F-fact table of an SAP BW cube is automatically partitioned by the packet dimension. Each time a new request is loaded into the cube, a new partition is created on the F-fact table. Typically customers load data once a day or less, so 1,000 partitions are sufficient for almost 3 years. Furthermore, you can reduce the number of partitions by performing the SAP BW cube compression (which should not be confused with SQL Server data compression). However, some customers loaded data several times a day, which resulted in hitting the 1,000 partition limit quickly. The 1,000 partition limit was also a pain during migrations of SAP BW systems from Oracle to SQL Server. Oracle has supported far more than 1,000 partitions for years, so we often see SAP BW systems on Oracle which already have more than 1,000 partitions. We had this particular scenario in mind when we decided to set the new limit to 15,000. In practice, more than a few thousand partitions make no sense. Having tens of thousands of partitions will not increase the overall system performance - it will very likely decrease it.
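How many partitions an F-fact table already has can be checked directly on the database (a sketch; the table name follows the usual BW naming convention /BIC/F<cube> and is a placeholder here):

  -- Number of partitions of a BW F-fact table (heap or clustered index)
  SELECT COUNT(*) AS partition_count
  FROM sys.partitions
  WHERE object_id = OBJECT_ID('[dbo].[/BIC/FMYCUBE]')
    AND index_id IN (0, 1);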
We had the privilege to deliver a SQL Server 2012 First Customer Shipment project, using the standard SAP migration tools. On May 1 we had the production system go-live, and I'd like to share the benefits so far.
OS Source: Windows Server 2003
Microsoft Case Study
http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?casestudyid=710000001454

Links for more information about SAP applications on MS SQL Server 2012 and case studies:
http://www.microsoft.com/casestudies/Microsoft-SQL-Server-2012-Enterprise/Wei-Chuan-Foods-Corporation/FoodManufacturer-Keeps-Competitive-Edge-and-Improves-Support-for-Growing-Business/710000000330
http://blogs.msdn.com/b/saponsqlserver/archive/2011/11/17/microsoft-s-sap-deployment-and-sql-server-2012.aspx
http://www.microsoft.com/casestudies/Microsoft-SQL-Server-2012-Enterprise/Microsoft-Information-TechnologyGroup-MSIT/Microsoft-Uses-SQL-Server-2012-5.8-Terabyte-SAP-ERP-Database-to-Run-Its-GlobalBusiness/710000000346
http://blogs.msdn.com/b/saponsqlserver/archive/2012/03/29/sql-2012-is-released-amp-running-live-at-wei-chuanfoods-taiwan.aspx

SAP Notes:
1651862 - Release planning for Microsoft SQL Server 2012

SQL Server 2012 Technologies for SAP Solutions
http://ecohub.sap.com/api/resource/4fabc95ad2a87c2a63d2b792
Tags: sqlserver, database, mss, ms_sql_server_2012, migration_to_ms_sql_server, sap_application_on_ms_sql_server_2012
In my previous blog (How to Decrease Your SAP Database Size?) I gave all known options to decrease the total DB size of SAP solutions. The fastest and maybe the cheapest method is using an alternative database software. The dinosaurs in IT still don't believe in databases other than Oracle, but nowadays it is easy to say that they are wrong. With the 64-bit technology evolution, all databases perform quite well. This weekend, in Turkey, at one of the biggest retail customers, we did an export and import of an ERP production system which was already running Windows 2008 / SQL Server 2008, but whose database didn't have ROW and PAGE compression. So we decided to export and import the database to lower the DB size.
Results
Total DB Size (prior to operation): 5200 GB
Total DB Size (after SQL 2008 R2 conversion*): 1000 GB
Export/Import Duration (done in parallel): 60 hours
Space Benefit: ~81%

* SQL 2008 R2 conversion: here, "conversion" means exporting the database and importing it again into a SQL Server 2008 R2 database with the latest SAP kernel 7.0 to enable ROW and PAGE compression.
Client side libraries used by SAP/ABAP for SQL Server

1. Interface to the Database
In order to store and retrieve data from a database, a program needs to use an interface that is dependent on that particular DBMS (Database Management System). As of R/3 kernel release 4.5A, the database-dependent part of the R/3 database interface is stored in a separate library, the DBSL, and so the ABAP implementation consists of:
- The DBSL (Database Shared Library), a database-dependent part of the SAP kernel that is dynamically linked to the SAP kernel.
- The database client tools, i.e. some libraries that are usually provided by the database manufacturer. These are either statically or dynamically linked to the database library.
Note that a Java stack also needs an interface, but it uses a completely different technology (JDBC). This is beyond the scope of this article.

1.1. SAP's DBSL

The ABAP stack of an SAP system installed on a SQL Server database needs such an interface to access the <SAPSID> database; in the same way it is necessary in order to access an external SQL Server through a DBCON (Multiconnect) connection, e.g. to use the external database as a data source in an SAP BW system, or to centrally monitor several databases of your corporate landscape from an SAP Solution Manager (transaction DBACOCKPIT). The dynamic link library that SAP delivers so that an ABAP stack is able to connect to a MS SQL Server database is called dbmssslib.dll (check note 400818 for further details on this naming convention). It is distributed with the kernel, but you can also download it separately from the SAP Service Marketplace (LIB_DBSL.SAR, in the database-dependent part of the kernel). It must be installed in the ABAP kernel executable directory (DIR_EXECUTABLE) of the SAP application server that is to access the <SAPSID> database (or any other SQL Server external database). Note that the fact that it is a DLL implies that it can only be installed on a Microsoft Windows platform; as the Microsoft Data Access technologies are not available on other platforms, SAP did not implement any other dbmssslib for non-Windows platforms. This implies that you cannot directly access a SQL Server database from an SAP application server running e.g. on a Unix host.

1.2. Loading the Database interface at startup

When initiating an SAP system, the database-dependent database library is loaded before the DBSL is called for the first time. The system searches for the library in the directory indicated by the environment variable DIR_LIBRARY (e.g. /usr/sap/<SAPSID>/SYS/exe/run). The environment variable dbms_type contains the name of the required database management system. When the system is initiated, an attempt is made to load the library belonging to the required database management system from the directory indicated by the environment variable DIR_LIBRARY.
Among a large list of possibilities, Microsoft delivers ODBC and OLEDB for general access to data sources, as well as the SQL Server Native Client, which can only be used for MS SQL Server databases and is therefore highly optimized.
- ODBC (Open DataBase Connectivity): a call-level access (i.e. API functions) for C/C++ applications to varying data stores through ODBC drivers. ODBCconf.exe is a command-line utility for configuring drivers and data source names (DSNs).
- OLEDB (Object Linking and Embedding, DataBase): an object-level access (i.e. a set of COM-based interfaces) that exposes data from a variety of sources through OLEDB providers to be accessed by C/C++ applications.
These are the ones that are used by an ABAP stack. For more information on the available technologies, you can check http://msdn.microsoft.com/library/ee730344.aspx.

2.1. MDAC (Microsoft Data Access Components)[i]

In order to connect their applications to a relational database, developers can use a variety of providers and drivers that are shipped by Microsoft or by third parties. MDAC is one of these interfaces and it is part of the operating system. It implements OLEDB (CLSID_MSDASQL) and ODBC drivers for SQL Server. It requires a separate connection for each active select. The active select referred to here is the select using a client, or firehose, cursor. We learned quite early that the normal select via server-side cursor is quite expensive, so whenever possible we use the client-side cursor method, which means that we just issue the select and process the rowset. The drawback of this really fast method, before MARS (see the next paragraph), was that it blocked the socket (a socket defines the database connection through the network as a file descriptor defines the access to a local file) for all other operations until the rowset was read. So we mainly exploited this method for uncommitted reads, using multiple additional database connections before MARS. An SAP connection thus consisted of N uncommitted read connections and one committed read connection (which handled the committed reads, the blocking reads and the modifications). In the case of committed read we used the firehose/client-side cursor method only for single selects or certain special cases. Nowadays, MARS (Multiple Active Result Sets) allows the handling of multiple rowsets in one database connection. So, now with MARS, the SAP connection consists of only two database connections: one for committed read (where we still use server-side cursors) and one for uncommitted reads (where we handle parallelism by using MARS).

2.2. SNAC (SQL Server Native Client)

SQL Server Native Client is a stand-alone data access application programming interface (API) that includes OLEDB (CLSID_SQLNCLI) and ODBC drivers. It was first shipped with SQL Server 2005 (SNAC 9.0). SNAC supports MARS (Multiple Active Result Sets), which allows a single connection to simultaneously support multiple active selects. The SNAC software is distributed by Microsoft with SQL Server 2005 and later versions as the file sqlncli.msi. You should look for a version suitable for your hardware platform and install it on your application server. It is important that you install SNAC 2005 SP1 or later (check note 960985). You should also make sure that you install in all the SAP application servers the SNAC version that matches your SQL Server version (or a later one, according to note 1082356). If this is not done, unexpected issues can take place.
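The two-connection setup described above can be observed from the database side: MARS logical sessions share one physical connection, which sys.dm_exec_connections exposes (a sketch):

  -- Physical connections show a real net_transport (e.g. TCP);
  -- MARS logical sessions show net_transport = 'Session' and reference
  -- their physical connection via parent_connection_id
  SELECT session_id, connection_id, parent_connection_id, net_transport
  FROM sys.dm_exec_connections;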
Older releases of the ABAP/DBSL interface use OLEDB. DBSL 7.00/7.01 implements both the older OLEDB and the newer ODBC version. DBSL 7.10 and later implements only the ODBC version. Exception: special 7.10 and later DBSL DLLs are available for use with SQL 2000 where supported (dbmssslib_oledb.dll). This is mainly because SQL Server 2000 does not support MARS, and the ODBC DBSL requires MARS. The ODBC DBSL will always try to use the latest available SNAC ODBC driver. The SNAC ODBC driver is implemented by SQLNCLI*.DLL. Microsoft guarantees that newer versions are backward compatible with previous server versions.
5. References
SAP Note 400818 - Information about the R/3 Database Library
SAP Note 323151 - Several DB connections with Native SQL
SAP Note 178949 - MSSQL: Database MultiConnect with EXEC SQL
SAP Note 734034 - Native OLEDB provider SQLNCLI
SAP Note 738371 - Creating DBCON multiconnect entries for SQL Server
SAP Note 960985 - existing Stored Procedure erroneously considered as missing
SAP Note 1082356 - Using the ODBC based DBSL for Microsoft SQL Server
SAP Note 1238905 - Connection is busy with results for another command
SAP Note 1248222 - ODBC DBSL profile parameters and connect options
SAP Note 1263367 - Accept MDAC driver for DBCON
SAP Note 1341097 - MSSQL: 720 DCK, 7.0* on SQL 2000, dbmssslib_oledb.dll
SAP Note 1506487 - Error 3997 when executing native SQL
SAP Note 1644499 - How to set up a connection to MS SQL Server from Linux
SAP KBA 1544360 - SQL Error 402 during DB compression with report MSSCOMPRESS
http://msdn.microsoft.com/library/ee730344.aspx
http://msdn.microsoft.com/en-us/library/ms810810.aspx
http://msdn.microsoft.com/en-us/library/ms131035.aspx
http://help.sap.com/saphelp_nw04/helpdata/en/f3/914f3445194d468f652d45494230b1/content.htm

[i] Starting with Windows Vista, the data access components are called Windows Data Access Components, or Windows DAC.
The others
Posted by Lars Breddemann Oct 14, 2009
Ok, I admit, I don't have a very good idea of MS SQL Server. I do Oracle and MaxDB - that's pretty much it. Of course, as a database support guy you always need to peek over the fence to the other DBMS (e.g. when working on priority Very High messages during weekends), but this has nothing to do with gaining a certain level of real experience with the 'other' DBMS. And although my MS SQL colleague usually sits just at the opposite end of the desk, everybody is busy enough with working on his/her own stuff. This makes me all the more lucky to have found the following blog about SAP on MS SQL Server: Running SAP Applications on SQL Server. So if you're into MS SQL Server you really want to pay this one a visit (as long as you return to good old Oracle and MaxDB afterwards ;-)). As far as I'm informed, the blog is written by several authors, some of them working at SAP in Walldorf most of their time. Have fun reading! Lars