Testing the products of software development is, or should be, just as important as the development itself. Deploying a solution that does not work, or, just as bad, a solution that worked in the development environment but for some reason does not work in the production environment, is simply unacceptable.
Visual Studio 2010 comes with a number of tools that help the developer test their custom code before deploying to the production server.
First of all, since we’re talking about best practices, we should make special note of the fact that “testing” will not always mean the same set of procedures. In fact, the developer will be required to define up-front what success will require. For example, success for a specific system/installation might be measured in successfully served requests per second. In another case, success may be the amount of data the server can host and serve before a visible degradation in speed occurs. There are a number of things that success can look like for any specific situation, and the developer will need to be able to describe the requirements of success before delving into any test scenarios.
Visual Studio 2010 comes with a number of test tools (depending on which version of the product you have purchased) that will help the developer test their system. In the Ultimate edition of VS you will find tools for unit testing, code coverage, impact analysis, coded UI testing, web performance testing and load testing. You can make the jumps to read more about each of those tools. However, it is worth highlighting how coded UI testing works. This tool will record the user’s actions on the user interface and will generate code in VS to imitate those actions. This way, the developer can automate tests without having to perform them manually on the user interface! Cool stuff 🙂
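To give a flavour of what that generated code looks like, here is a minimal, hand-written sketch of a coded UI test in C#. The recorder normally generates a UIMap class for you; the site URL, the “Shared Documents” link and the class name below are purely illustrative assumptions.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class DocumentLibraryUITest
{
    [TestMethod]
    public void OpenDocumentLibrary()
    {
        // Launch the browser against the test site; the URL is a placeholder.
        BrowserWindow browser = BrowserWindow.Launch(new Uri("http://testserver/sites/demo"));

        // Find the "Shared Documents" link by its inner text and click it,
        // much like the recorder-generated UIMap code would.
        HtmlHyperlink documentsLink = new HtmlHyperlink(browser);
        documentsLink.SearchProperties.Add("InnerText", "Shared Documents");
        Mouse.Click(documentsLink);
    }
}
```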
The Administration Toolkit and the Load Testing Kit (LTK)
Microsoft has already published the Administration Toolkit, part of which is the Load Testing Kit. The official TechNet webpage, with information on what is included and a link to the actual package, is at the other end of the jump (in case you can’t be bothered to read through the TechNet introduction, make the jump straight to the download location). This Kit is a set of pre-built web and load tests that the developer can customise to meet his/her specific needs. It also includes a very handy tool, the Data Generation Utility, which will greatly speed up testing by creating usage pools that represent a subset of users with a particular load profile, users in Active Directory (AD), datasets on the farm, and the run-tables for use with Visual Studio. Testing on steroids! Furthermore, the LTK provides a Log Analysis utility which scans the IIS logs (on the production server) and creates test loads in VS based on those logs, so that the developer can build real-life test scenarios.
Best Practices for Capacity and Load Testing
1. The developer should attempt to imitate the production environment as closely as possible on the development platform. Admittedly, there are cases where the production environment is of a size that is simply impossible to recreate (unless of course the company does not mind doubling the budget for testing purposes, but that is rarely the case). In such cases, the developer can still aim for as close a recreation as possible: the Active Directory (AD) structure and hierarchies, security groups, domain topology, etc. should all be set up to match the production environment as closely as possible.
2. There are some things that the developer may miss, simply because they are not always mentioned alongside SharePoint best practices. For example, the developer should also make sure that the best practices for SQL Server are followed. An issue of great importance is that the SQL Server behind SharePoint is set up to run maintenance jobs during the night. These jobs rebuild indexes and update statistics so that SharePoint can perform optimally. If the developer recreates (a subset of) the data from the production server on the development server, they need to either allow SQL Server to run the same operations on the test data during the night or manually rebuild the indexes and update the statistics.
Side Note: There is an interesting option that comes with SQL Server -ever since version 2000, I believe. This is the Autogrow option that many developers/administrators have had problems with, but which can still prove invaluable at times (which is probably why it is still there). The Autogrow option does exactly what the name suggests; it lets the database keep growing on its own. For SharePoint, this is not a very smart option. First of all, Autogrow will grow your DB when it needs to, not when you want it to. Second, it will grow in unexpected ways: it will fragment your hard drive and, eventually, it will slow your system down. Since SharePoint stores all of its data in SQL DBs, you really do not want that to happen. Pre-allocate all the space that you anticipate requiring, so that it sits in a single chunk of the hard disk instead of having to deal with fragmented DB files spread across fragmented chunks of one or more hard drives. Only in extreme cases should you consider allowing SQL Server to autogrow on your production server. That is probably the same as saying, don’t ever do it.
3. There is an option in VS that you should set to False: Parse Dependent Requests. Unless you turn that option off, you will likely get a larger Requests per Second (RPS) figure than the end users will. That is only normal, as VS will parse the requests and fetch the resources dependent on those requests from the web front ends, which cache these resources. The end users might not have these resources cached, so you should test against that scenario (a short coded web test sketch that turns this option off follows this list).
4. Best practice for web tests requires the developer to create ALL web tests as atomic operations instead of complete scenarios. The reason behind that is that if you create a complete scenario, a single action in that scenario might take a long time to complete (downloading a large document, for example), which would wrongly give a false positive that the server is performing well, only because the server will starve for actions to perform (ie. it will not do anything but serve a single document during that time). Best practices require the developer to create single actions -atomic tests- and then weave those together into large-scale scenarios. This way, a real-life test is much more achievable (see the scenario sketch after this list).
5. The Load Testing Kit (LTK) includes utilities to parameterise SharePoint tests so that they will not fail because of encoded URLs, GUIDs, etc. You should definitely use these, unless you prefer to manually parameterise these factors in your tests (the scenario sketch after this list shows the manual, context-parameter approach).
6. Validation rules should be used with your SharePoint tests. There are in fact a couple of validation rules that the developer will definitely want to implement. SharePoint tests are likely to report false successes when they land on the pages error.aspx and accessdenied.aspx while running. This is actually normal behaviour, because these two are valid SharePoint pages and the tests will correctly report a success (since the pages were reached), but the developer will need to create validation rules to make sure that any test that reaches these pages is reported as a failure. The test request has in fact errored, and the developer should be aware of that instead of getting false success reports. The conscientious developer will in fact create validation rules for all the webpages he/she expects a request to return: if, for example, the test request is expected to complete at the webpage targetReached.aspx, then a validation rule should be created to only report the test as successful if that specific webpage is reached. A sample validation rule follows this list.
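Regarding point 3, here is a minimal sketch of a coded web test (using the Microsoft.VisualStudio.TestTools.WebTesting classes) that turns Parse Dependent Requests off on a request. The URL and class name are placeholders for your own environment; the declarative web test editor exposes the same property per request.

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class HomePageWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // A single, atomic request to the home page; the URL is a placeholder.
        WebTestRequest request = new WebTestRequest("http://testserver/sites/demo/default.aspx");

        // Point 3: do not let Visual Studio fetch the dependent resources
        // (images, CSS, JS) that the web front ends serve from cache.
        request.ParseDependentRequests = false;

        yield return request;
    }
}
```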
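For points 4 and 5, the sketch below takes the same idea a step further: atomic actions wrapped in small helper methods, woven together into a scenario, with the site URL pulled from a context parameter instead of being hard-coded. All names and URLs here are illustrative, and the LTK utilities can of course do the parameterisation for you.

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

// A scenario woven together from atomic actions, with the site URL taken
// from a context parameter instead of being hard-coded.
public class BrowseDocumentsScenario : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // Fall back to a default if the run settings do not supply the parameter.
        if (!this.Context.ContainsKey("WebServerUrl"))
        {
            this.Context.Add("WebServerUrl", "http://testserver/sites/demo");
        }

        string site = this.Context["WebServerUrl"].ToString();

        // Atomic action 1: open the home page.
        yield return OpenPage(site + "/default.aspx");

        // Atomic action 2: open the document library view.
        yield return OpenPage(site + "/Shared%20Documents/Forms/AllItems.aspx");
    }

    // Each atomic action is a single request that can also be mixed into
    // other scenarios or load tests on its own.
    private static WebTestRequest OpenPage(string url)
    {
        WebTestRequest request = new WebTestRequest(url);
        request.ParseDependentRequests = false;
        return request;
    }
}
```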
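And for point 6, a custom validation rule along these lines will turn those false successes into proper failures. The class name is my own; the rule simply inspects the URL the response actually came from.

```csharp
using Microsoft.VisualStudio.TestTools.WebTesting;

// Fails any request that ends up on SharePoint's error.aspx or
// accessdenied.aspx pages, even though both return a valid response.
public class SharePointErrorPageValidationRule : ValidationRule
{
    public override void Validate(object sender, ValidationEventArgs e)
    {
        string finalUrl = e.Response.ResponseUri.AbsoluteUri.ToLowerInvariant();

        if (finalUrl.Contains("error.aspx") || finalUrl.Contains("accessdenied.aspx"))
        {
            e.IsValid = false;
            e.Message = "Request ended on a SharePoint error page: " + finalUrl;
        }
        else
        {
            e.IsValid = true;
        }
    }
}
```

In a coded web test you would wire the rule up before yielding the request, for example with request.ValidateResponse += new SharePointErrorPageValidationRule().Validate; in a declarative web test you would add it through the Add Validation Rule dialog.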
Best Practices for Security Levels and Permissions Testing
Ideally, the developer should always keep in mind that their code will be executed by end users who have a significantly smaller subset of permissions than their own. This is all the more important when we remember that SharePoint 2010 has also introduced throttles to ensure server performance. Copying word-for-word from my post about SharePoint 2010 Enhancements for Stability, “probably the most important thing for the developer to note is that throttles do not apply to the SharePoint Administrator! That means that when working in the development environment, where it is probable that the developer is also the SharePoint administrator, the developer will fail to notice that his/her code is hitting a throttle. Moving that code to the production environment will be catastrophic since users (who will obviously have a severely trimmed subset of permissions) will hit the throttles when running the code and receive errors from the system. The developer needs to make sure that he/she tests custom code with a number of user credentials, ideally one for each security level permitted in the production environment!”
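One way to exercise custom code under a low-privilege account, without logging in and out all the time, is to re-open the site with that user’s token through the server object model. Below is a minimal console sketch of that idea; the site URL, account name and list are placeholders, and the SPQueryThrottledException catch illustrates the kind of throttle an ordinary user will hit while the administrator sails straight past it.

```csharp
using System;
using Microsoft.SharePoint;

class PermissionLevelSmokeTest
{
    static void Main()
    {
        // Placeholders: point these at your test farm and a low-privilege test account.
        string siteUrl = "http://testserver/sites/demo";
        string testUserLogin = @"DOMAIN\limited.user";

        using (SPSite adminSite = new SPSite(siteUrl))
        {
            // Grab the token of the low-privilege account...
            SPUserToken token = adminSite.RootWeb.EnsureUser(testUserLogin).UserToken;

            // ...and re-open the site under that identity.
            using (SPSite site = new SPSite(siteUrl, token))
            using (SPWeb web = site.OpenWeb())
            {
                SPList list = web.Lists["Shared Documents"];

                try
                {
                    // A large query that an administrator would get away with.
                    SPQuery query = new SPQuery { RowLimit = 10000 };
                    Console.WriteLine("Items returned for {0}: {1}",
                        testUserLogin, list.GetItems(query).Count);
                }
                catch (SPQueryThrottledException)
                {
                    // The ordinary user hits the list view threshold here;
                    // the administrator account would never see this.
                    Console.WriteLine("Query throttled for " + testUserLogin);
                }
            }
        }
    }
}
```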
There are a number of third-party tools that the developer can incorporate to get better testing results. A lot of them are mentioned in the Wrox bibliography I have already proposed, but I should mention here the Visual Round Trip Analyser (VRTA).
Visual Round Trip Analyser (VRTA)
The Visual Round Trip Analyser, although mentioned alongside the third-party tools, is in fact a tool from Microsoft. A free one, too. This utility sits on top of Microsoft Network Monitor and visualises how long it takes for the client to communicate with the server. This is very helpful, as the information provided by the Visual Round Trip Analyser can be used to understand whether there are excessive round trips, whether the custom code is slowing down the pages, or whether there are network issues causing problems between the custom application and the server.