Recently Steve Malmgren, the Chief Architect of MAS 90 and 200, published a blog post describing the key differences between the old MAS 200 SQL version and the new MAS 200 SQL version 4.45. This new SQL version is due out in November 2010.

Here are the highlights:

I want to  shed some light on the key differences between the 3.7x SQL product and the new Sage ERP MAS 200 SQL version 4.45, releasing to market in November 2010.

The bottom line is that 4.45 has achieved a level of performance comparable to the 4.40 Providex version, especially for data entry and maintenance screens.  For an average-size company data set, reports, registers, and updates will be close to Providex, and do not exhibit the type of degradation experienced in the 3.7x product.  Ad-hoc reporting speed via Business Insights Explorer and Crystal Reports is greatly improved over the Providex ODBC driver.


Feedback on performance of the product from partners involved in the beta process has been very positive.  Even before beta began, we hosted a preview party where we invited several partners to Irvine to test-drive 4.45, and we received positive feedback on performance and many other aspects of the new SQL product.  Check out this link to a blog entry from Doug Deane at DSD that discusses that experience.


Now let’s talk about what’s under the hood and why this is the case…

One of the primary architectural goals of the Business Framework was to be able to support a much better implementation on SQL Server.  This was primarily achieved in two areas.

Schema:

A fundamental change needed to occur at the schema level, in how the MAS 200 programs access and interact with that data.  In the 3.7x version, the schema inside the SQL Server database tables was flattened out into discrete columns; however, the MAS 200 program logic was designed to reference a single large string with position offsets for each individual column (remember the Offset and Length from the TRSG?).  What this meant was that every time a row was read from SQL Server, the Data Access Layer needed to take the discrete columns from the table and concatenate them into one big string that the programs could interact with.  Then, when the row was written back out, the string had to be split back into the individual columns in the SQL table.
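The round trip described above can be sketched in a few lines of Python. The column names, offsets, and lengths below are invented for illustration; they are in the spirit of an Offset/Length map from the TRSG, not the actual MAS 200 schema.

```python
# Hypothetical offset/length map (values are made up for illustration).
LAYOUT = {
    "CustomerNo":   (0, 7),    # (offset, length)
    "CustomerName": (7, 30),
    "ZipCode":      (37, 10),
}

def row_to_string(row: dict) -> str:
    """3.7x-style read: pad each column to its fixed length and concatenate."""
    return "".join(row[col].ljust(length) for col, (_, length) in LAYOUT.items())

def string_to_row(record: str) -> dict:
    """3.7x-style write: slice the big string back into discrete columns."""
    return {col: record[off:off + length].rstrip()
            for col, (off, length) in LAYOUT.items()}

row = {"CustomerNo": "01-ABF", "CustomerName": "American Business Futures",
       "ZipCode": "53205"}
record = row_to_string(row)            # performed on every read in 3.7x
round_tripped = string_to_row(record)  # performed on every write in 3.7x
assert round_tripped == row
```

In 4.45 this mapping disappears entirely: the programs read and write the discrete columns directly, so neither conversion runs on any read or write.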

Another issue that compounded this problem was the variant (or multiple) record types defined in the system.  An example of this is the 3.7x SO2 table, the detail table for Sales Orders, where, based on the line type (regular item, miscellaneous item, special, comment, etc.), different sets of columns from the SQL table had to be mapped into the large string and then split back out on the write.

Finally, the 3.7x detail tables were stored as linked-list sequential-access tables.  Not only did these tables cause issues for custom Crystal reports trying to present the detail lines in the correct order, they were also a source of initial deadlock issues and cross-linked orders.
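To see why a linked-list table is hostile to ad-hoc reporting, consider this illustrative sketch (not the actual 3.7x storage format): each detail row carries a pointer to the next line, so entry order can only be recovered by chasing pointers, which a plain SQL `ORDER BY` in a custom Crystal report cannot do.

```python
# Invented detail rows keyed by an arbitrary row key; "next" points to the
# following line in entry order, with None marking the end of the chain.
rows = {
    "C": {"item": "widget",  "next": None},
    "A": {"item": "gadget",  "next": "C"},
    "B": {"item": "comment", "next": "A"},
}
head = "B"  # first line of the order

def traverse(rows, head):
    """Recover entry order by walking the chain -- required for 3.7x-style tables."""
    order, key = [], head
    while key is not None:
        order.append(rows[key]["item"])
        key = rows[key]["next"]
    return order

print(traverse(rows, head))  # ['comment', 'gadget', 'widget']
```

A 4.45-style detail table carries an explicit sequence column instead, so a simple `ORDER BY` on that column returns the lines in entry order directly, and there are no pointers to become cross-linked.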

All of these issues are eliminated in version 4.45.  All programs interact directly with the discrete columns in the SQL tables, and there are no more variant record types, which eliminates the back-and-forth mapping between the application and the database and improves overall performance of the system.  Eliminating the linked-list sequential-access tables for transaction detail entry also improves the overall integrity of the system.



ADO Data Access Layer:

The biggest performance difference between the 3.7x version and 4.45 is the streamlined Data Access Layer, which uses ADO to request data from SQL Server.  One of the largest performance bottlenecks in the 3.7x version was how data was requested from SQL Server, together with some basic assumptions about how that data would be used within the application.  In 3.7x, when data was requested, a Dynamic Cursor was generated so that rows could be fetched as they needed to be processed.  It is important to note that this cursor contains all rows from the starting access point to the end of the table, based on the index being used.  If the process needed to consume all of the rows in the cursor, this was relatively efficient.  If, however, the assumption that all rows are needed was incorrect, performance problems surfaced, because the cursor was discarded and potentially rebuilt many times during the process.  This becomes particularly evident with large data sets.  Furthermore, even if all rows were to be processed, any updates or deletes issued against those rows would cause the cursor to be discarded and recreated, causing more performance issues.

In 4.45, we have taken several steps to alleviate these problems.

  • The new development coding standards present in the Business Framework modules support the ability to only request the rows that are needed, which eliminates the need to generate a cursor to the end of the table.
  • Dynamic Cursors are no longer used, replaced with the fastest possible Read Only/Forward Only cursor type.
  • When processing sets of data such as updating or deleting multiple rows, these operations are executed on a separate statement, meaning the cursor remains open and is not discarded and recreated multiple times.
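The first and third points above can be sketched with Python's standard-library `sqlite3` module (the real product uses ADO against SQL Server, and the table and column names here are invented): request only the bounded set of rows actually needed rather than opening a cursor to the end of the table, and issue updates through a separate statement so the read result set is never discarded and rebuilt.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SO_Detail (LineKey INTEGER PRIMARY KEY, Qty INTEGER)")
conn.executemany("INSERT INTO SO_Detail VALUES (?, ?)",
                 [(i, i * 10) for i in range(1, 101)])

read_cur = conn.cursor()    # forward-only by nature, like a read-only cursor
write_cur = conn.cursor()   # separate statement for the updates

# Fetch only the bounded batch we need -- no cursor to end-of-table.
read_cur.execute("SELECT LineKey, Qty FROM SO_Detail WHERE LineKey >= ? "
                 "ORDER BY LineKey LIMIT ?", (1, 10))
for line_key, qty in read_cur.fetchall():
    # Updates go through the second cursor; the read results are untouched.
    write_cur.execute("UPDATE SO_Detail SET Qty = ? WHERE LineKey = ?",
                      (qty + 1, line_key))
conn.commit()

print(conn.execute("SELECT Qty FROM SO_Detail WHERE LineKey = 1").fetchone()[0])
# -> 11
```

The key design point is the separation of concerns: one statement streams rows forward-only, while a second statement performs the writes, so neither invalidates the other.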

Other significant performance gains with the ADO Data Access Layer:

  • Three types of data caching to avoid multiple requests to SQL Server for the same data within a process.  This lowers CPU usage on the server by reducing the number of queries parsed, as well as reduces network traffic by eliminating repeated requests.
  • Parameterized Queries for frequently issued queries, reducing CPU usage on the server by allowing SQL Server to reuse the most effective query plan on multiple requests.
  • Update statements only update changed columns, as compared to updating every column in the table regardless of whether it was changed as was the case in the 3.7x product.
  • Efficient retrieval of related data during report processing, greatly reducing the number of queries issued to SQL Server.
  • Bulk insert for creating report worktables, for extremely fast inserts.
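Three of the techniques in the list above can be sketched with standard-library `sqlite3` (the schema and names are invented; SQL Server's plan cache plays the role that SQLite's statement reuse plays here): a parameterized query reused across requests, an UPDATE built only from the columns that changed, and a bulk insert for a report worktable.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (CustNo TEXT PRIMARY KEY, Name TEXT, Zip TEXT)")
conn.execute("INSERT INTO Customer VALUES ('01-ABF', 'ABF Inc', '53205')")

# Parameterized query: the same statement text is reused for every request,
# so the engine can reuse its query plan instead of re-parsing a new
# literal-laden string each time.
lookup = "SELECT Name FROM Customer WHERE CustNo = ?"
for cust in ("01-ABF", "01-ABF"):
    conn.execute(lookup, (cust,))

# Update only the columns that actually changed, not every column in the row.
changed = {"Zip": "53203"}  # e.g. produced by diffing old vs new values
set_clause = ", ".join(f"{col} = ?" for col in changed)
conn.execute(f"UPDATE Customer SET {set_clause} WHERE CustNo = ?",
             (*changed.values(), "01-ABF"))

# Bulk insert for a report worktable: one prepared statement, many rows.
conn.execute("CREATE TABLE ReportWork (CustNo TEXT, Amount REAL)")
conn.executemany("INSERT INTO ReportWork VALUES (?, ?)",
                 [("01-ABF", 100.0), ("02-XYZ", 250.0)])

print(conn.execute("SELECT Zip FROM Customer WHERE CustNo = '01-ABF'").fetchone()[0])
# -> 53203
```

The changed-columns UPDATE also matters beyond CPU time: touching fewer columns writes less to the transaction log and avoids firing work for columns whose values did not move.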

I hope this inspires you to check it out for yourself and gives you a good feel for the significant improvements that we have made in the approach and implementation of Sage ERP MAS 200 SQL.  You may also be interested in my previous blog entry on Sage ERP MAS 200 SQL if you haven’t seen it already.

Thank you for your time.

Steve Malmgren

Chief Architect, MAS 90 and 200


Jim Woodhead

DSD Business Systems

Jim Woodhead is Vice President of Professional Services of DSD Business Systems, a national provider of on-demand (cloud) and on-premises ERP and CRM software, specializing in wholesale distribution, manufacturing, warehouse management, inventory, business intelligence and eCommerce software. DSD offers Sage 100 (formerly MAS 90), Sage 300 (formerly Accpac), Sage 500 (formerly MAS 500), NetSuite, Sage FAS, Sage HRMS (formerly Abra), Sage CRM, Sage SalesLogix, Extended Solutions, and Custom Programming.