Sunday, January 24, 2010

The Most Important WCF Class

Even fairly experienced WCF programmers may never have had to deal with this little gem; however, it is the single most important class in the WCF framework. Why is that? Because the Message class provides the fundamental abstraction representing all data sent to or received from any WCF endpoint.

Key Parts

There are four key parts to the Message class:

  • Version - The Version property contains information about the SOAP and addressing versions used by the message. WCF fully supports non-SOAP messages, for which this property will be None to indicate the lack of any special formatting.
  • Properties - Message properties contain processing information about the message which will not be written to the underlying transport stream as part of the message content. One example of what message properties can be used for is controlling things like HTTP verbs and status codes.
  • Headers - A collection of message headers. A few common headers such as To and Action can be accessed via shortcut properties in the headers collection.
  • Body - You can’t actually access the message body directly, because the Message class abstraction is meant to be used as if the content of the Message were a stream; the body is read through a reader, as in the sketch below.
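
Here is a minimal sketch of these parts in code (assuming the System.ServiceModel.Channels and System.Xml namespaces; the action URI and body value are just placeholders):

Message message = Message.CreateMessage(
    MessageVersion.Soap12WSAddressing10, // the Version part
    "urn:example:action",                // the Action header (hypothetical)
    42);                                 // body content, serialized by WCF
message.Headers.To = new Uri("http://localhost/service"); // shortcut header

// The body is exposed only through a reader, stream-style:
using (XmlDictionaryReader reader = message.GetReaderAtBodyContents())
{
    int body = reader.ReadElementContentAsInt(); // readable only once
}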

It’s Abstract

This will come as a surprise to many people, because the WCF framework does a very good job of hiding this fact, but the Message class itself is an abstract class. So put away any preconceived notions you might have had about how WCF at its core can only be used for web services and XML-based communication. The fact is that you could create a Message subclass backed directly by a raw binary array if you didn’t want any kind of performance hit for loading the data… as a matter of fact, you probably wouldn’t want to waste your time doing that, because WCF does exactly that when you are using many of the default settings! Actually, it does it even more efficiently than you probably would on your own, by taking advantage of buffer managers to reduce the number of allocations that need to be made as messages are created and disposed. So, despite the fact that the underlying abstraction is streaming, WCF actually buffers messages into a byte array by default to allow for higher performance.

It’s Streaming

Once you read the message, it cannot be read again. For this reason, all custom Message classes must also provide a MessageBuffer implementation. I won’t say much about the MessageBuffer class at this point, other than that it is also abstract and is responsible for creating additional “copies” of the message. I say “copies” because the default implementations used by the standard bindings won’t actually copy anything; they just return another instance backed by the same internal byte array. This makes for ultra-efficient message copying, but it is only possible because of another trait of the message, discussed below. The reason Message implementers must provide their own MessageBuffer implementations is so that the copy operation itself can be as optimized as possible.
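
Here is a quick sketch of the buffering pattern using the built-in CreateBufferedCopy (the size quota is arbitrary):

Message original = Message.CreateMessage(MessageVersion.Soap11, "urn:example:action", "hello");

// Buffering consumes the original; all further work goes through the buffer.
MessageBuffer buffer = original.CreateBufferedCopy(int.MaxValue);
Message copy1 = buffer.CreateMessage(); // backed by the same internal bytes
Message copy2 = buffer.CreateMessage(); // each copy can itself be read once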

It’s Immutable

Well… sort of. The properties and headers of the message can be modified at will, making things like routing logic and header processing much easier to deal with, but the message body itself cannot be modified using the Message or MessageBuffer classes. For this reason, many hooks in the WCF internals where a Message might need to be modified pass the message as a ref parameter so it can be swapped for another message.
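
For example, IClientMessageInspector.BeforeSendRequest (in System.ServiceModel.Dispatcher) receives the message by ref; a hypothetical inspector can swap in a whole new message rather than try to mutate the body:

class SwappingInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Build a replacement message (the body value here is hypothetical)...
        Message replacement = Message.CreateMessage(
            request.Version, request.Headers.Action, "replaced body");
        replacement.Headers.To = request.Headers.To;

        // ...and swap it in; possible only because the parameter is by ref.
        request = replacement;
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}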

It’s Not XML

The last basic thing you need to know about the Message class, which I already touched on but should make clear again: the Message class does not have to represent XML data! This may seem like a strange statement if you’ve looked casually at the methods provided by the Message class. However, this is only due to the unfortunate naming the WCF team chose for XmlDictionaryReader and XmlDictionaryWriter, which have a lot of methods for reading and writing XML data but fully support reading and writing raw binary content as well.

Thursday, January 21, 2010

ADO.NET

Most applications need data access at some point, making it a crucial component when working with applications. Data access means making the application interact with a database, where all the data is stored. Different applications have different requirements for database access. VB.NET uses ADO.NET (ActiveX Data Objects .NET) as its data access and manipulation protocol, which also enables us to work with data over the Internet. Let's take a look at why ADO.NET came into the picture, replacing ADO.

Evolution of ADO.NET

The first data access model, DAO (Data Access Objects), was created for local databases with the built-in Jet engine and had performance and functionality issues. Next came RDO (Remote Data Objects) and ADO (ActiveX Data Objects), which were designed for client-server architectures, but soon ADO took over from RDO. ADO was a good architecture, but as the language changes, so does the technology. With ADO, all the data is contained in a Recordset object, which caused problems when used across a network or through a firewall. ADO was a connected data access model, which means that when a connection to the database is established, the connection remains open until the application is closed. Leaving the connection open for the lifetime of the application raises concerns about database security and network traffic. Also, as databases become increasingly important and serve more people, a connected data access model raises questions about scalability. For example, an application with connected data access may do well with two clients, do poorly with 10, and be unusable with 100 or more. Also, open database connections consume system resources, making the system less effective.

Why ADO.NET?

To cope with the problems mentioned above, ADO.NET came into existence. ADO.NET addresses them by using a disconnected database access model, which means that when an application interacts with the database, the connection is opened to serve the request and closed as soon as the request is completed. Likewise, if a database is updated, the connection is opened just long enough to complete the update operation and then closed. By keeping connections open for only the minimum period of time, ADO.NET conserves system resources, provides maximum security for databases, and has less impact on system performance. Also, when interacting with the database, ADO.NET converts the data into XML format for database-related operations, making them more efficient.


The ADO.NET Data Architecture

Data Access in ADO.NET relies on two components: DataSet and Data Provider.

DataSet

The DataSet is a disconnected, in-memory representation of data. It can be considered a local copy of the relevant portions of the database. The DataSet is persisted in memory, and the data in it can be manipulated and updated independently of the database. When the use of the DataSet is finished, the changes can be written back to the central database. The data in a DataSet can be loaded from any valid data source, such as a Microsoft SQL Server database, an Oracle database or a Microsoft Access database.

Data Provider

The Data Provider is responsible for providing and maintaining the connection to the database. A DataProvider is a set of related components that work together to provide data in an efficient, performance-driven manner. The .NET Framework currently comes with two DataProviders: the SQL Data Provider, which is designed to work only with Microsoft SQL Server 7.0 or later, and the OleDb DataProvider, which allows us to connect to other types of databases like Access and Oracle. Each DataProvider consists of the following component classes:

The Connection object which provides a connection to the database
The Command object which is used to execute a command
The DataReader object which provides a forward-only, read-only, connected recordset
The DataAdapter object which populates a disconnected DataSet with data and performs updates


Data access with ADO.NET can be summarized as follows:

A connection object establishes the connection for the application with the database. The command object provides direct execution of the command to the database. If the command returns more than a single value, the command object returns a DataReader to provide the data. Alternatively, the DataAdapter can be used to fill the DataSet object. The database can be updated using the command object or the DataAdapter.

ADO .NET Data Architecture

Component classes that make up the Data Providers

The Connection Object

The Connection object creates the connection to the database. Microsoft Visual Studio .NET provides two types of Connection classes: the SqlConnection object, which is designed specifically to connect to Microsoft SQL Server 7.0 or later, and the OleDbConnection object, which can provide connections to a wide range of database types like Microsoft Access and Oracle. The Connection object contains all of the information required to open a connection to the database.

The Command Object

The Command object is represented by two corresponding classes: SqlCommand and OleDbCommand. Command objects are used to execute commands to a database across a data connection. The Command objects can be used to execute stored procedures on the database, SQL commands, or return complete tables directly. Command objects provide three methods that are used to execute commands on the database (see the sketch after this list):

ExecuteNonQuery: Executes commands that have no return values such as INSERT, UPDATE or DELETE
ExecuteScalar: Returns a single value from a database query
ExecuteReader: Returns a result set by way of a DataReader object
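
Here is a sketch of all three methods in C# (assuming a connectionString variable and an illustrative Person table with FirstName and LastName columns):

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();

    // ExecuteNonQuery: no result set, just the number of rows affected
    SqlCommand insert = new SqlCommand(
        "INSERT INTO Person (FirstName, LastName) VALUES (@first, @last)", connection);
    insert.Parameters.AddWithValue("@first", "Ann");
    insert.Parameters.AddWithValue("@last", "Smith");
    int rowsAffected = insert.ExecuteNonQuery();

    // ExecuteScalar: a single value
    SqlCommand count = new SqlCommand("SELECT COUNT(*) FROM Person", connection);
    int total = (int)count.ExecuteScalar();

    // ExecuteReader: a result set by way of a DataReader
    SqlCommand select = new SqlCommand("SELECT FirstName, LastName FROM Person", connection);
    using (SqlDataReader reader = select.ExecuteReader())
    {
        // read rows here
    }
}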


The DataReader Object

The DataReader object provides a forward-only, read-only, connected stream recordset from a database. Unlike other components of the Data Provider, DataReader objects cannot be directly instantiated. Rather, the DataReader is returned as the result of the Command object's ExecuteReader method. The SqlCommand.ExecuteReader method returns a SqlDataReader object, and the OleDbCommand.ExecuteReader method returns an OleDbDataReader object. The DataReader can provide rows of data directly to application logic when you do not need to keep the data cached in memory. Because only one row is in memory at a time, the DataReader provides the lowest overhead in terms of system performance but requires the exclusive use of an open Connection object for the lifetime of the DataReader.
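
A minimal read loop might look like this (same illustrative table and connectionString as above):

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("SELECT FirstName FROM Person", connection))
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read()) // forward-only: each call advances exactly one row
        {
            Console.WriteLine(reader.GetString(0));
        }
    } // disposing the reader releases the connection it held exclusively
}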

The DataAdapter Object

The DataAdapter is the class at the core of ADO .NET's disconnected data access. It is essentially the middleman facilitating all communication between the database and a DataSet. The DataAdapter is used to fill a DataTable or DataSet with data from the database using its Fill method. After the memory-resident data has been manipulated, the DataAdapter can commit the changes to the database by calling its Update method. The DataAdapter provides four properties that represent database commands:

SelectCommand
InsertCommand
DeleteCommand
UpdateCommand

When the Update method is called, changes in the DataSet are copied back to the database and the appropriate InsertCommand, DeleteCommand, or UpdateCommand is executed.
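
Here is a sketch of the disconnected round trip (same assumptions as the earlier sketches; SqlCommandBuilder derives the insert, update, and delete commands from the select):

using (SqlDataAdapter adapter = new SqlDataAdapter(
    "SELECT PersonID, FirstName, LastName FROM Person", connectionString))
{
    SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

    DataSet dataSet = new DataSet();
    adapter.Fill(dataSet, "Person");   // opens and closes the connection itself

    // work on the in-memory copy, disconnected from the database
    dataSet.Tables["Person"].Rows[0]["LastName"] = "Jones";

    adapter.Update(dataSet, "Person"); // reconnects and applies the changes
}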

.NET Framework

1. What is .NET?
.NET is the Microsoft Web services strategy to connect information, people, systems, and devices through software. Integrated across the Microsoft platform, .NET technology provides the ability to quickly build, deploy, manage, and use connected, security-enhanced solutions with Web services. .NET-connected solutions enable businesses to integrate their systems more rapidly and in a more agile manner and help them realize the promise of information anytime, anywhere, on any device.


2. Which versions of .NET have been released so far?

The final version of the .NET 1.0 SDK and runtime was made publicly available on 15 January 2002. At the same time, the final version of Visual Studio .NET was made available to MSDN subscribers.

.NET 1.1 was released in April 2003, with bug fixes. Visual Studio 2003 supports development of applications in version 1.1.

.NET 2.0 was launched in October 2005 for MSDN subscribers and officially released in November 2005.

In June 2006, .NET 3.0 was launched. This version was earlier called WinFX. Visual Studio 2005 supports development of both .NET 2.0 and .NET 3.0 applications. .NET 3.0 comprises the following:

Windows Communication Foundation
Windows Presentation Foundation
Windows Workflow Foundation
Windows CardSpace

The next version of Visual Studio, code-named Orcas, has been released as Beta 1. The .NET Framework 3.5 Beta is also available as a download.


3. Which tools can be used for .NET development?

The .NET Framework SDK is free and includes command-line compilers for C++, C#, and VB.NET and various other utilities to aid development.

SharpDevelop is a free IDE for C# and VB.NET.

Microsoft Visual Studio Express editions are cut-down versions of Visual Studio for hobbyist or novice developers, available for free download from the Microsoft site. Note that the .NET 2.0 Framework is installed along with Visual Studio Express and all higher editions.

There are different versions for C#, VB, web development, etc. Microsoft Visual Studio Standard 2005 is around $300, or $200 for the upgrade.

Microsoft Visual Studio Professional 2005 is around $800, or $550 for the upgrade. At the top end of the price range are the Microsoft Visual Studio Team Edition for Software Developers 2005 with MSDN Premium and the Team Suite editions. Visual Web Developer Express is available as a free download.

The next version of Visual Studio, code-named Orcas, has been released as Beta 1 and is available for download.


4. Explain CLI, CIL, CTS, Metadata, CLS, IL and VES in .NET

CLI - Common Language Infrastructure. Microsoft has a piece of shared source; it is the public implementation of the ECMA Common Language Infrastructure. This shared code is code-named "Rotor". It has around 3 million lines of code. Those who are interested in developing a language that targets the .NET Framework may make extensive use of this CLI. The following topics are covered in the Shared Source CLI:

* The CLI type system
* Component packing & assemblies
* Type Loading & JIT Compilation
* Managed code & Execution Engine (CLR)
* Description of Garbage Collection process & memory management
* The Platform Adaptation Layer (PAL): a portability layer for Win32®, Mac OS® X, and FreeBSD

It was written by the Microsoft team that developed the .NET Framework.

Note: A compiled managed assembly comprises IL, metadata, and a manifest.

CIL stands for Common Intermediate Language. It is actually a low-level, human-readable language implementation of the CLI. All .NET-aware languages compile their source code to this intermediate language using a language-specific compiler. It is also possible to build .NET assemblies directly in CIL using the ilasm.exe compiler, which ships with the .NET Framework 2.0 SDK. CIL is the only language that allows access to every aspect of the CTS. CIL is the definition of the fundamentals of the .NET Framework.

CTS - stands for Common Type System. It is at the core of the .NET Framework's cross-language integration, type safety, and high-performance code execution. It defines a common set of types that can be used with many different language syntaxes. Each language (C#, VB.NET, Managed C++, and so on) is free to define any syntax it wishes, but if that language is built on the CLR, it will use at least some of the types defined by the CTS.

Metadata - is data that describes the compiled IL. A .NET language compiler generates the metadata and stores it in the assembly containing the CIL. Metadata describes all the classes and class members that are defined in the assembly, as well as the classes and class members that the current assembly will call in other assemblies. The metadata for a method contains the complete description of the method, including the class (and the assembly that contains the class), the return type and all of the method parameters. When the CLR executes CIL, it checks that the metadata of the called method matches the metadata stored in the calling method. This ensures that a method can only be called with exactly the correct number and types of parameters.

CLS - Common Language Specification. A type that is CLS-compliant may be used from any .NET language. The CLS is a set of rules that defines language standards for a .NET language and the types declared in it. When declaring a new type, if we apply the [CLSCompliant] attribute, the type is forced to conform to the rules of the CLS, as in the sketch below.
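
For example, a minimal sketch of enforcing CLS compliance at the assembly level (unsigned types are a classic source of non-compliance in public signatures):

using System;

[assembly: CLSCompliant(true)]

public class Measurements
{
    public int Length;   // fine: int is CLS-compliant
    // public uint Size; // uncommenting this would trigger a CLS-compliance
                         // warning, because unsigned types are not in the CLS
}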

IL - Intermediate Language, is the compiled form of .NET language source code. When .NET source code is compiled by a language-specific compiler (say we compile C# code using csc.exe), it is compiled to a .NET binary, which is platform independent and is called Intermediate Language code. The .NET binary also contains metadata.

It is important to note here that metadata describes the IL, whereas the manifest describes the assembly.

VES - Virtual Execution System. The Virtual Execution System (VES) provides an environment for executing managed code. It provides direct support for a set of built-in data types; defines a hypothetical machine with an associated machine model and state, a set of control flow constructs, and an exception handling model. To a large extent, the purpose of the VES is to provide the support required to execute the Common Intermediate Language instruction set.


5. What is CLR in .NET?

Common Language Runtime - the implementation of the CLI and the core runtime engine in the Microsoft .NET Framework for executing applications. The common language runtime supplies managed code with services such as cross-language integration, code access security, object lifetime management, resource management, type safety, pre-emptive threading, metadata services (type reflection), and debugging and profiling support. The ASP.NET Framework and Internet Explorer are examples of CLR hosts.

The CLR is a multi-language execution environment. There are currently over 15 compilers being built by Microsoft and other companies that produce code that will execute in the CLR.

The CLR is described as the "execution engine" of .NET. It is the CLR that manages the execution of programs and provides the environment within which they run. The version of .NET installed on a machine is actually the CLR version.

When a .NET program is compiled, the output of the compiler is not a native executable but a file that contains a special type of code called Microsoft Intermediate Language (MSIL, now called CIL, Common Intermediate Language). MSIL defines a set of portable instructions that are independent of any specific CPU. It is the job of the CLR to translate this intermediate code into executable code when the program is run, allowing the program to run in any environment for which the CLR is implemented. That is how the .NET Framework achieves portability. The MSIL is turned into executable code by a JIT (Just-In-Time) compiler. The process goes like this: when a .NET program is executed, the CLR activates the JIT compiler, which converts MSIL into native code on demand as each part of the program is needed. Thus the program executes as native code, running as fast as it would if it had been compiled to native code, while keeping the portability benefits of MSIL.

6. What is a Class Library in .NET?

The class library is another major entity of the .NET Framework. This library gives programs access to the runtime environment. The class library consists of lots of prewritten code that all the applications created in .NET-aware languages and Visual Studio .NET will use. The code for all the elements like forms and controls in VB.NET applications actually comes from the class library.

Code in class libraries may be shared and reused. One of the core .NET libraries is mscorlib.dll; .NET language compilers reference this library automatically because it contains the core types. A class library contains types that may be used by external applications. A class library may be a DLL or an EXE. Note that although .NET class libraries have the same file extensions as the old COM Win32 binaries, they are very different internally.


7. Explain Managed code, managed class and managed data in .NET

Managed Code - The .NET Framework provides lots of core runtime services to the programs that run within it, for example security and exception handling. Such code has a minimum level of information (metadata) associated with it and is called managed code. VB.NET, C#, and JScript .NET code is managed by default. To make C++ code managed, we use Managed Extensions for C++, marking a class with the __gc keyword.

Managed Data - Data that is allocated and freed by the .NET runtime's garbage collector.

Managed Class - A class whose objects are managed by the CLR's garbage collector. In VC++.NET, classes are not managed by default; however, they can be made managed using Managed Extensions, via the __gc keyword. A managed C++ class can inherit from VB.NET classes, C# classes, and JScript .NET classes. A managed class can inherit from only one class; .NET doesn't allow multiple inheritance in managed classes.


8. What is an assembly in .NET? What is ILDASM?

Assembly - An assembly may be an exe, a dll, an application having an entry point, or a library. It may consist of one or more files. It represents a group of resources, type definitions, and implementations of those types. It may contain references to other assemblies. These resources, types and references are described in a block of data called the manifest. The manifest is part of the assembly, which makes it self-describing. Assemblies also increase the security of code in .NET. An assembly may be shared (public) or private. Overall, an assembly comprises three entities: IL, manifest, and metadata. Metadata describes the IL, whereas the manifest describes the assembly.

An assembly may be created by building the class (the .vb or .cs file), thereby producing its DLL.

ILDASM - The contents of an assembly may be viewed using the ILDASM tool that comes with the .NET SDK or Visual Studio .NET. The ildasm.exe tool may also be run from the command line.


9. What is Reflection in .NET?

Reflection - The process of getting at the metadata of modules and assemblies. When .NET code is compiled, metadata about the types defined in the modules is produced. These modules are in turn packaged as assemblies. The process of accessing this metadata is called reflection.

The namespace System.Reflection contains classes that can be used for interrogating the types in a module or assembly. We use reflection, for example, to examine data type sizes for marshalling across process and machine boundaries.

Reflection is also used for the following (a sketch follows the list):

1) To dynamically invoke methods (using System.Type.InvokeMember)
2) To dynamically create types at runtime (using System.Reflection.Emit.TypeBuilder).
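
Here is a small sketch of interrogating an assembly's types via reflection (the assembly file name is hypothetical):

using System;
using System.Reflection;

class ReflectionDemo
{
    static void Main()
    {
        Assembly assembly = Assembly.LoadFrom("Library1.dll"); // hypothetical assembly

        foreach (Type type in assembly.GetTypes())
        {
            Console.WriteLine(type.FullName);
            foreach (MethodInfo method in type.GetMethods())
            {
                Console.WriteLine("  " + method.Name);
            }
        }
    }
}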


10. What are the different types of assemblies in .NET?

An assembly may be Public or Private. A public assembly is also called a Shared Assembly.

A Satellite Assembly - is an assembly that contains only resources, and no code. The resources are locale-specific. A satellite assembly is associated with a main assembly, the one that actually contains the code.

11. What is the difference between a Public Assembly and a Private Assembly?

An assembly is the basic building block in .NET. It is the compiled format of a class, containing metadata, a manifest and Intermediate Language code.

An assembly may be either Public or Private. A public assembly means the same as Shared Assembly.

Private Assembly - This type of assembly is used by a single application. It is stored in the application's directory or the application's sub-directory. There is no version constraint on a private assembly.

Shared Assembly or Public Assembly - A shared assembly has a version constraint. It is stored in the Global Assembly Cache (GAC), a repository of shared assemblies maintained by the .NET runtime, located at C:\Windows\Assembly or C:\Winnt\Assembly. Shared assemblies may be used by many applications. To share an assembly with many applications, it must be strongly named.

A Strong Name assembly is an assembly that has its own identity, through its version and uniqueness.

In order to convert a private assembly to a shared assembly, i.e. to create a strongly named assembly, follow the steps below...

1) Create a strong key using the sn.exe tool, which generates a cryptographic key pair. The key pair generated by the Strong Name tool can be kept in a file, or we can store it in the local machine's Cryptographic Service Provider (CSP). For this, go to the .NET command prompt and type the following...

sn -k C:\samplekey.snk

This will create a strong key and save it to the location C:\samplekey.snk.

2) If the key is stored in a file, as we have done above, we use the AssemblyKeyFileAttribute, which belongs to the System.Reflection namespace. If the key were in the CSP, we would use System.Reflection.AssemblyKeyNameAttribute.

Go to the assemblyinfo.vb file of your project. Open this file. Make the following changes in this file...

<Assembly: AssemblyKeyFile("C:\samplekey.snk")>

We may write this in our code as well, like this...

Imports System.Reflection
<Assembly: AssemblyKeyFile("C:\samplekey.snk")>

Namespace StrongName
Public Class Sample
End Class
End Namespace

3) Build your project. Your assembly is now strongly named.
To install the shared assembly in the GAC, go to the .NET command prompt and use the gacutil.exe tool. Type the following...

gacutil /i sampleclass.dll

To uninstall it, use...

gacutil /u sampleclass.dll

Visual Studio .NET provides a GUI tool for viewing all shared assemblies in the GAC.

How To Set Elements Of An Array Of A Private Type Using Visual Studio Shadows

Visual Studio uses Publicize to create public accessors for the private members and types of a type.

But when you try to set elements of a private array of elements of a private type, things get complicated.

Imagine this hypothetical class to test:

public static class MyClass
{
    private static readonly MyInnerClass[] myArray = new MyInnerClass[10];
 
    public static bool IsEmpty()
    {
        foreach (var item in myArray)
        {
            if ((item != null) && (!string.IsNullOrEmpty(item.Field)))
            {
                return false;
            }
        }
 
        return true;
    }
 
    private class MyInnerClass
    {
        public string Field;
    }
}

If I want to write a test for the case when the array has “non empty” entries, I need to set up the array first.

Using the accessors generated by Visual Studio, I would write something like this:

[TestClass()]
public class MyClassTest
{
    [TestMethod()]
    public void IsEmpty_NotEmpty_ReturnsFalse()
    {
        for (int i = 0; i < MyClass_Accessor.myArray.Length; i++)
        {
            MyClass_Accessor.myArray[i] = new MyClass_Accessor.MyInnerClass { Field = i.ToString() };
        }

        bool expected = false;
        bool actual;

        actual = MyClass.IsEmpty();

        Assert.AreEqual(expected, actual);
    }
}

But the test will fail because, although the elements of the private array myArray can be read as MyClass_Accessor.MyInnerClass instances, they can’t be written as such.

To do so, the test would have to be written like this:

[TestClass()]
public class MyClassTest
{
    [TestMethod()]
    public void IsEmpty_NotEmpty_ReturnsFalse()
    {
        for (int i = 0; i < MyClass_Accessor.myArray.Length; i++)
        {
            MyClass_Accessor.ShadowedType.SetStaticArrayElement(
                "myArray", new MyClass_Accessor.MyInnerClass { Field = i.ToString() }.Target, i);
        }

        bool expected = false;
        bool actual;

        actual = MyClass.IsEmpty();

        Assert.AreEqual(expected, actual);
    }
}

But, this way, we lose all the strong typing of the accessors because we need to write the name of the array field as a string.

Because the accessor for the field is a property, we could write a set of extension methods that take care of getting the field name for us. Something like this:

using System;
using System.Linq.Expressions;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class PrivateTypeExtensions
{
    public static void SetStaticArrayElement<T>(this PrivateType self,
        Expression<Func<T[]>> expression, T value, params int[] indices)
    {
        object elementValue = (value is BaseShadow) ? (value as BaseShadow).Target : value;

        self.SetStaticArrayElement(
            ((PropertyInfo)((MemberExpression)(expression.Body)).Member).Name,
            elementValue,
            indices);
    }

    public static void SetStaticArrayElement<T>(this PrivateType self,
        Expression<Func<T[]>> expression, BindingFlags invokeAttr, T value, params int[] indices)
    {
        object elementValue = (value is BaseShadow) ? (value as BaseShadow).Target : value;

        self.SetStaticArrayElement(
            ((PropertyInfo)((MemberExpression)(expression.Body)).Member).Name,
            invokeAttr,
            elementValue,
            indices);
    }
}

Now, we can write the test like this:

[TestClass()]
public class MyClassTest
{
    [TestMethod()]
    public void IsEmpty_NotEmpty_ReturnsFalse()
    {
        for (int i = 0; i < MyClass_Accessor.myArray.Length; i++)
        {
            MyClass_Accessor.ShadowedType.SetStaticArrayElement(
                () => MyClass_Accessor.myArray,
                new MyClass_Accessor.MyInnerClass { Field = i.ToString() }, i);
        }

        bool expected = false;
        bool actual;

        actual = MyClass.IsEmpty();

        Assert.AreEqual(expected, actual);
    }
}

It’s not the same as the first form, but it’s strongly typed and we’ll get a compiler error instead of a test run error if we change the name of the myArray field.

Wednesday, January 20, 2010

How to Create a Web Server Farm Cache

Introduction

Dynamic database-driven web sites are required if your site has large quantities of data to upload, manage, and serve back to your customers. However, IT departments typically say this creates too big a load for the database -- hence many people keep the files on the hard drive along with their script pages. Not only does this approach scale poorly when you expand from one web server to a web farm, it is hard to back up, and the file system makes for a poor database, lacking the transactional qualities of a real database.

You can keep your files and images in the database if you take advantage of client-side caching and implement a web server file cache. The benefits include:

  • All your content in one place: textual, images, and relationships are in the database.
  • The ability to deploy the files to multiple web servers, without additional replication work.
  • The ability to backup all the content in a single database backup.
  • Minimum database reads for files, on low-write / high-read systems.

This article will show a simple way to deploy caching so that you can keep all your content in the database efficiently.

Associated Article

This article is provided in conjunction with another article about streaming uploaded files. The caching portion of the associated web site is in the Cache.aspx file; the upload portion is in the UploadBlob.aspx file.

Requirements

You need an IIS web server and a SQL Server. The SQL tables and stored procedures are provided in the Upload.sql file. You will also need files to upload; the /images directory of the associated web site has three images that you can use.

The SQL statement for creating the table is:

CREATE TABLE [dbo].[UploadBytesOfBlob](
[FileGuid] [uniqueidentifier] NOT NULL,
[Name] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[Content] [varbinary](max) NULL,
[ContentType] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[Size] [numeric](18, 0) NULL,
[NumberOfChunks] [numeric](18, 0) NULL,
[FirstModifiedDate] [datetime] NULL,
[LastModifiedDate] [datetime] NULL,
CONSTRAINT [PK_UploadBytesOfBlob] PRIMARY KEY CLUSTERED
(
[FileGuid] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]

The primary key for the table in the SQL database is called FileGuid and is of type uniqueidentifier, which corresponds to a Guid in ASP.NET.

Compile and Run the Site

Download, compile, and run the site. Make sure to create the SQL table and stored procedures using the Upload.sql file. You will be presented with the Default.aspx page, which has a link to the UploadBlob.aspx page. Upload the ascent.jpg file found in the /images directory. After the upload, you will be redirected to default.aspx, and a GridView will show the image information. Do not click on either link just yet. Open Windows Explorer and check that the file guid listed in default.aspx is not found in the /CacheFolder directory of the web site. The folder should be empty. When the image is requested, the folder will get a file named guid.dat, where the actual database guid is the prefix of the filename.

How the Cache System Works

For the rest of this article we will use the following scenario. Assume we have an editor who uploads images as part of his work. The image is large and must eventually be loaded onto 12 web servers. The image may change from time to time so that it stays current. When the image does change, it is important that the latest version of the image is used, and not the image from the client-side cache or the web-server cache.

A customer of the web site then requests the page which includes the image. What happens? For the purposes of explaining the system, there will be several requests.

The First Request

On the very first request, the image is in the database but not in any cache. The system checks the client-side cache for the image, including the relevant date of the image, using the If-Modified-Since request header. The file is not found there, since this is the first request, so the system checks the web server file cache, again including the relevant date of the image. The file is not found there either, so the last step is to grab the image from the database. Since the web server is fetching the file, the file is placed in the web site's cache folder (our own system, not the IIS cache) and returned to the client. As part of the browser's work to handle the image coming down, it is placed in the client-side cache.

The Second Request

The customer comes back a second time to request the file while the file in the client-side cache is still the most current version. The cache system responds with a 304, and the client's system fetches the image locally. This saves a trip to the database, and it also saves a trip to the web server's file system.

The Content has Changed

Imagine the editor has now uploaded a new version of the image into the database. All twelve web servers need to be updated with this new version. Instead of uploading the file to all twelve servers at the time of the change, the change is made to each web server as the image is requested. This means that the work of moving the file from the database to the web server is only done when necessary and is based on a client-request. This saves the IT department from having to deal with replication of the content across the web servers.

The Third Request

On the next request, the file in the client's local cache is no longer current. The cache system checks the web server cache to see if that version is the most current. If the web server cache folder does have the most current, then that version is returned to the client. If that web server cache folder version is not the most current, the current image is fetched from the database, reloaded into the web server cache by writing the database content to a file, then returned to the client and placed in the client cache.

The URL for the system looks like cache.aspx?FileGuid=XYZ... The FileGuid is used instead of a name or incremental number for the content so that the database can grow to a large size while still managing the content.

The image URL on the page then looks like <img src="cache.aspx?FileGuid=XYZ" />.

The Sample Web Site

The sample web site has three main files: default.aspx, cache.aspx, and uploadblob.aspx. Download, compile and run the web site. Make sure to create the SQL table and stored procedures using the Upload.sql file. Put a break point at the top of the Page_Load method of the cache.aspx.cs file. This is the area of code this article focuses on.

The first time you load the file, nothing will appear except a link to the upload page on the default.aspx. As you upload images, this page will show a GridView listing those images.

Click on the Upload File link, and you will go to the upload page. Once you upload the ascent.jpg image, you will be redirected to the default.aspx file which will now show a GridView with a row for the ascent.jpg file:

Set a break point in the cache.aspx at the top of the Page_Load function. Now click on the link in the FileGuid column. The breakpoint should activate in Visual Studio.

How the Code Works

The FileGuid of each row is used in the URL to the cache.aspx file. The URL looks like cache.aspx?FileGuid=91d... The FileGuid is not only the primary key for the database but is also the name of the file in the cache folder.

The code from cache.aspx for reading the FileGuid from the URL is:

// DFB: fileGuid is required
if (String.IsNullOrEmpty(Request["FileGuid"]))
throw (new Exception("FileGuid missing"));

// DFB: Capture fileGuid
Guid fileguid = new Guid(Request["FileGuid"]);

// DFB: fileGuid is required
if (fileguid == Guid.Empty)
throw (new Exception("FileGuid missing"));

Notice that the FileGuid value from the Request is converted to a Guid datatype in ASP.NET. This allows you to deal with the Guid object more easily and convert to the appropriate SQL data type when you need to.

Fetch From the Client Cache

The client's browser cache is checked by comparing the If-Modified-Since request header against the LastModifiedDate from the database. If the client's browser cache has the most recent copy, a response code of 304 is returned.

The code from cache.aspx for checking the client's cache If-Modified-Since request header is:

// DFB: connect to database
using (SqlConnection sqlConnection = new System.Data.SqlClient.SqlConnection(ConfigurationManager.ConnectionStrings["Test"].ConnectionString))
{
sqlConnection.Open();

// DFB: Get LastModifiedDate value only. We only fetch more from this row as needed.
using (sqlCommand = new SqlCommand("select LastModifiedDate from UploadBytesOfBlob where FileGuid='" + fileguid.ToString() + "'", sqlConnection))
{
lastModifiedDate = DateTime.Parse(sqlCommand.ExecuteScalar().ToString());
}

// DFB: Get Client-side Browser Cache's If-Modified-Since Date
if(!String.IsNullOrEmpty(Request.Headers["If-Modified-Since"]))
{
ifModifiedSince = DateTime.Parse(Request.Headers["If-Modified-Since"], System.Globalization.CultureInfo.InvariantCulture);

// DFB: If the client-side browser cache has the most current version,
// then return 304 and let the client fetch from its own cache
if (lastModifiedDate <= ifModifiedSince)
{
Response.StatusCode = 304;
Response.StatusDescription = "Not Modified";
Response.End();
return;
}
}

From this point on in the code, the content will either come from the local web server's cache or the database.

The Web Server's File System Cache

In order to reduce reads from the database and propagate the content to all web servers in the farm, the cache.aspx file will store a copy on the local file system of each web server in the farm as the content is requested.

The first time a request for the data is made, the data is served from the database and a file is created on the local web server. Each subsequent request is served from the local web server instead of making the unnecessary request for the content from the database if the LastModifiedDate value in the database hasn't changed.

The location of the cache folder is in the web.config. You will want to put the folder at the root of the website. Make sure the IUSR_servername, IWAM_servername, and the ASPNET local accounts have full permission to the cache folder.

// DFB: Build up cache path with file name where file name is guid.dat
// and the guid is the primary key of the table
cachePath = String.Format("{0}{1}.dat", ConfigurationManager.AppSettings["CacheFolder"], fileguid.ToString());

The file name will be FileGuid.dat. The file isn't given an extension associated with the type of content because we do not need to associate it to a type of content until it is requested. It is faster to read the database content type (mime type) than to look up the corresponding file extension in the registry and then determine its content type.

When we set the file's last modified date, the date will be the LastModifiedDate found in the database. This allows the code to determine if the file in the cache is out of sync with the database in future requests for the file.

If the directory is not found, it will be created. If the file is found in the directory, the date of the file is compared to the known LastModifiedDate returned from the database in the above code used for checking the client's cache. If the file is not found in the cache folder, the dirty flag is set to true.

// DFB: if the file system cache folder doesn't exist, create it
if (!Directory.Exists(ConfigurationManager.AppSettings["CacheFolder"]))
{
Directory.CreateDirectory(ConfigurationManager.AppSettings["CacheFolder"]);
dirty = true;
}
else
{
// DFB: Check to see if the file is in the cache
if (File.Exists(cachePath))
{
// DFB: Check the last write time of the file
FileInfo fileInfo = new FileInfo(cachePath);

// DFB: If times are different, mark the flag
// DFB: marking dirty means go get new data out of the database
lastWriteTime = fileInfo.LastWriteTime;
dirty = (lastWriteTime != lastModifiedDate);
}
else
{
// DFB: file is not found in cache
dirty = true;
}
}

The dirty flag indicates that the content needs to be fetched from the database and written to the cache folder. It is important to understand that the content is read back from the database one chunk at a time inside a WHILE loop. This allows only a chunk of the entire file to be in memory at a time.

if (dirty)
{
// DFB: Cached File Doesn't Exist or has old data so get data.

// DFB:Write Database Content to file
using (FileStream fileStream = File.Create(cachePath))
{
using (BinaryWriter binaryWriter = new BinaryWriter(fileStream))
{
Int64 sqlRead = 0;
Int64 index = 0;
Byte[] buffer = new Byte[1024];

using (sqlCommand = new SqlCommand("select Content from UploadBytesOfBlob where FileGuid='" + fileguid.ToString() + "'", sqlConnection))
{
using (SqlDataReader sqlDataReader = sqlCommand.ExecuteReader(CommandBehavior.SequentialAccess))
{
if (!sqlDataReader.HasRows)
throw (new Exception("Invalid Id"));

sqlDataReader.Read();

while ((sqlRead = sqlDataReader.GetBytes(0, index, buffer, 0, buffer.Length)) != 0)
{
// WWB: Cast is fine since the buffer size < int.MaxValue
binaryWriter.Write(buffer, 0, (int)sqlRead);
index += sqlRead;
}
}
}
}
}

// DFB: Time Stamp The Last Modified Time With
// The Last Modified Time of the FileStore Object
// So that we can track changes
FileInfo fileInfo = new FileInfo(cachePath);
fileInfo.LastWriteTime = lastModifiedDate;
}

Once the file is in the cache folder, you need to construct the response to the client. This includes cache values, the content type, and the content.

// DFB: Setup To Successfully Cache The Page On the Client
Response.Cache.SetLastModified((lastModifiedDate).ToUniversalTime());
Response.Cache.SetCacheability(HttpCacheability.Public);

// DFB: ETag determines if data has changed on server
// DFB: ETag can be server/programmer specific but should
// be generated at the server to indicate data uniqueness
// DFB: For this program, I'll hash the row's id with the row's
// last modified date
Response.Cache.SetETag(Hash(Request["FileGuid"].ToString() + lastModifiedDate));

// DFB: Set The Content-Type So the Browser Knows How To Render
// The Response
using (sqlCommand = new SqlCommand("select ContentType from UploadBytesOfBlob where FileGuid='" + fileguid.ToString() + "'", sqlConnection))
{
Response.ContentType = sqlCommand.ExecuteScalar().ToString();
}

// DFB: Writing The Cached File Allows IIS To Stream The Output
Response.WriteFile(cachePath);
}
}
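
Note that the code above calls a Hash helper that is not shown in this excerpt; a minimal sketch of such a helper (the MD5 choice here is an assumption) could be:

// Hypothetical helper: hashes the combined id + date string into a hex digest.
private static string Hash(string value)
{
    using (System.Security.Cryptography.MD5 md5 = System.Security.Cryptography.MD5.Create())
    {
        byte[] hash = md5.ComputeHash(System.Text.Encoding.UTF8.GetBytes(value));
        return BitConverter.ToString(hash).Replace("-", "");
    }
}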

Subsequent Requests for the Content

After the content is in a file in the cache folder, all subsequent requests will either come from the client's browser cache or the web server's cache folder until the content LastModifiedDate changes on the server. Since both situations check the LastModifiedDate in the database before returning the content, you can be sure that only the most current version is displayed on the client.

When the data is updated in the database, the client cache and the web server cache versions will be checked for date freshness and the dirty flag will be set to true. At this point, the entire content will be re-fetched from the database and the file recreated on the file system.

Clearing the Web Server Cache

Now that the file is in the web server cache with a file name that corresponds to the FileGuid, you can clear out the cache on a schedule of your choosing. You can delete all the contents, all contents before a certain date, or all contents of a certain content type.

Summary

This caching system illustrates how to build a database-driven web site using the single storage of a database while still using the cache features of the client and web server. The caches are checked using the If-Modified-Since of the cache content and the LastModifiedDate of the database file in order to deliver the most current data. Once the database content is modified, the cached file is no longer valid.

Creating a Data Access Layer for Paging in SQL Server 2005

Introduction

In my previous article, Paging in SQL Server 2005, I demonstrated how to support paging the results of a SQL query inside of a stored procedure. In this two-part article, I will tie that logic to an ASP.NET page with a GridView and DetailsView to demonstrate an efficient way to incorporate paging and updating through an ObjectDataSource. Along the way we will use the Class Diagram tool and the Web Application Project template (added to Visual Studio 2005 by Service Pack 1), although you are free to use methods you are more comfortable with.

Background

Visual Studio 2005 introduced a number of tools that allow web developers to create dynamic web pages more easily than ever. The GridView encapsulates most of the functionality needed for displaying and editing tabular data efficiently, and can bind to several types of data sources. The DetailsView is a sibling to the GridView, supporting most of the same functionality but displaying a single record at a time. While these controls can bind to databases through simple DataSets and SqlDataSources, the ObjectDataSource control provides a more clearly defined, more sophisticated -- but potentially more complicated -- connection. For developers looking to model their SQL data within the application through custom classes, the ObjectDataSource allows them to bind their strongly-typed classes to user controls and add a discrete Business Logic Layer to ensure user interactions follow defined rules of behavior.

The Database Layer

The first order of business is to create a database to work with; the examples below were created in a database called DALdemo, but you can change the connection string to whatever name you choose. Once you have done this, the following SQL statement will create a simple table to work with:

CREATE TABLE [dbo].[Person]
(
[PersonID] [int] IDENTITY(1,1) NOT NULL,
[FirstName] [nvarchar](50) NOT NULL,
[MiddleName] [nvarchar](50) NULL,
[LastName] [nvarchar](50) NOT NULL,
[Height] [decimal](5, 2) NULL,
CONSTRAINT [PK_Person] PRIMARY KEY CLUSTERED
(
[PersonID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

To populate the table with some preliminary test data, execute the following SQL statements:

INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('Frank', 'Edward', 'Holiday', 62.38)
INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('Nancy', 'Estelle', 'Walkafeller', 52.81)
INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('Stewart', 'R.', 'Garvey', 61)
INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('Jeff', 'A.', 'Walker', 66.43)
INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('Walter', 'Albert', 'Johnson', 65.66)
INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('Barbara', 'Connie', 'Jones', 65.38)
INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('William', '', 'Smith', 59.68)
INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('Gillian', 'Kay', 'Krissinger', 62.43)
INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('Sam', '', 'Stuart', 65.67)
INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('Richard', 'G.', 'Harding', 63.98)
INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('Laura', 'Edward', 'Kinsley', 70.56)
INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('Frank', 'Quincy', 'George', 67.31)
INSERT INTO Person (FirstName, MiddleName, LastName, Height) VALUES ('Frank', 'P.', 'Williams', 62.99)

We will use the following stored procedure to query this table, which incorporates SQL paging logic and uses a ReturnValue parameter to pass back the total number of records.

CREATE PROCEDURE [dbo].[utilPAGE]
@datasrc nvarchar(200)
,@orderBy nvarchar(200)
,@startPage int = 1
,@pageSize int = NULL
,@fieldlist nvarchar(200) = '*'
,@filter nvarchar(200) = ''
AS
SET NOCOUNT ON
DECLARE
@STMT nvarchar(max) -- SQL to execute
,@recct int -- total # of records
SET @recct = -1
IF LTRIM(RTRIM(@filter)) = '' SET @filter = '1 = 1'
IF @pageSize IS NULL BEGIN
SET @STMT = ' SELECT ' + @fieldlist +
' FROM ' + @datasrc +
' WHERE ' + @filter +
' ORDER BY ' + @orderBy
EXEC (@STMT) -- return requested records
END ELSE BEGIN
SET @STMT = ' SELECT @recct = COUNT(*)
FROM ' + @datasrc +
' WHERE ' + @filter
EXEC sp_executeSQL @STMT, @params = N'@recct INT OUTPUT', @recct = @recct OUTPUT

DECLARE
@lbound int,
@ubound int

IF @startPage < 1 SET @startPage = 1
IF @pageSize < 0 SET @pageSize = @pageSize * -1
IF @pageSize < 1 SET @pageSize = 1
SET @lbound = ((@startPage - 1) * @pageSize)
SET @ubound = @lbound + @pageSize + 1
IF @lbound >= @recct BEGIN
SET @ubound = @recct + 1
SET @lbound = @ubound - (@pageSize + 1) -- return last page of records
END
SET @STMT = ' SELECT ' + @fieldlist +
' FROM (
SELECT ROW_NUMBER() OVER(ORDER BY ' + @orderBy + ') AS row, *
FROM ' + @datasrc +
' WHERE ' + @filter +
' ) AS tbl
WHERE row > ' + CONVERT(varchar(9), @lbound) +
' AND row < ' + CONVERT(varchar(9), @ubound) + ' '
EXEC (@STMT) -- return requested records
END
RETURN @recct
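
As a sketch, here is how the ReturnValue might be captured when calling utilPAGE from C# (the connectionString variable is assumed; note that the return value is populated only after the reader is closed):

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("utilPAGE", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.AddWithValue("@datasrc", "Person");
    command.Parameters.AddWithValue("@orderBy", "LastName, FirstName");
    command.Parameters.AddWithValue("@startPage", 2);
    command.Parameters.AddWithValue("@pageSize", 5);

    SqlParameter returnValue = command.Parameters.Add("@RETURN_VALUE", SqlDbType.Int);
    returnValue.Direction = ParameterDirection.ReturnValue;

    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // bind or display this page of Person rows
        }
    }

    int totalRecords = (int)returnValue.Value; // total row count from RETURN @recct
}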

We will use the PersonSET stored procedure to handle updates to the Person table. PersonSET handles both inserts and updates, with items being inserted if the @ID parameter is less than one:

CREATE PROCEDURE [dbo].[PersonSET]
(
@ID int = -1
,@FirstName nvarchar(50)
,@MiddleName nvarchar(50)
,@LastName nvarchar(50)
,@Height decimal(5, 2)
)
AS
SET NOCOUNT ON
BEGIN TRY
BEGIN TRANSACTION
IF @ID > 0 BEGIN
UPDATE Person
SET FirstName = @FirstName,
MiddleName = @MiddleName,
LastName = @LastName,
Height = @Height
WHERE PersonID = @ID;
END ELSE BEGIN
INSERT INTO Person (FirstName, MiddleName, LastName, Height)
VALUES (@FirstName, @MiddleName, @LastName, @Height);
SET @ID = SCOPE_IDENTITY()
END
COMMIT TRANSACTION
RETURN @ID
END TRY
BEGIN CATCH
DECLARE
@ERNUM int
SELECT @ERNUM = ERROR_NUMBER()
ROLLBACK TRANSACTION
IF @ERNUM > 0 SET @ERNUM = -@ERNUM
RETURN @ERNUM
END CATCH

PersonDELETE removes one record at a time from Person, based on the ID field:

CREATE PROCEDURE PersonDELETE
(
@ID int = 5
)
AS
SET NOCOUNT ON
DELETE FROM Person
WHERE PersonID = @ID

The Data Object Layer

The next step is to create a new C# ASP.NET Web Application called WebApp1 in Visual Studio. As shown in Illustration 1, enable the option Create directory for solution and set the Solution Name to 15Seconds. This will allow us to work with multiple projects simultaneously.


Illustration 1: Creating the Web Project and Solution

To create the second project, select File | New | Project and create a new C# Class library called Library1. Be sure to change the default setting from Create new Solution to Add to Solution, as shown in Illustration 2.


Illustration 2: Creating the class project within the solution

Feel free to close and delete the Class1.cs file created automatically by Visual Studio, as we will not be using it. To create the class we do need, right-click on the Library1 project in the Solution Explorer, select Add | Class, and create a class named PersonEntity.cs. The quickest way to fill in the class is with the Class Editor; in the Solution Explorer, right-click on PersonEntity.cs and select View Class Diagram.

The class needs internal data members for holding values, and we will follow the common practice of prefixing an underscore to their names to avoid confusion later. To create these, click on the <add field> text in the Class Details window, type _ID, and press Tab. Set Type to int and set the modifier to private. Add the next three fields, _FirstName, _MiddleName, and _LastName, all with Type set to string. Lastly, add the field _Height with Type set to double.

The class needs accessor (set/get) methods so other classes can modify these internal values. To create these, click on the <add property> text, type ID, and press Tab. Set Type to int and set the modifier to public. Add the next three properties, called FirstName, MiddleName, and LastName, all with Type set to string. Lastly, add the property Height with Type set to double.

At this point, your class diagram should resemble Illustration 3.


Illustration 3: Creating the PersonEntity skeleton with the Class Diagram tool

Right-click on ClassDiagram1.cd and select Rename; change the name to diaPersonEntity.cd. Click on the PersonEntity.cs* tab to see what has been happening in the background. The fields we added will show up as private data members:

private int _ID;
private string _FirstName;
private string _MiddleName;
private string _LastName;
private double _Height;

The properties we added show up as public get/set methods. The ID property is:

public int ID
{
get
{
throw new System.NotImplementedException();
}
set
{
}
}

We need to wire the fields and properties together, so that when someone refers to a class property, the relevant field is used. Change the ID property to:

public int ID
{
get
{
return _ID;
}
set
{
_ID = value;
}
}

Making private data members with public get/set methods probably seems like a lot of extra work, but it is an important part of good design that prevents subtle bugs and supports additional functionality. If a class's internal data members are exposed, other classes may inadvertently modify the internal values. Also, set/get methods allow us to ensure the values used meet any requirements we might have. If, for example, we want to ensure that _ID can only be set to values greater than zero, we can add code in the set method to enforce this:

public int ID
{
get
{
return _ID;
}
set
{
if(value > 0)
{
_ID = value;
}
else
{
throw new ArgumentOutOfRangeException("value", "ID must be greater than zero");
}
}
}

Alternatively, if we want a string property to return an empty string when its internal value is null, we can use C#'s ?? (null-coalescing) operator in the get accessor. This is similar to SQL Server's ISNULL() function: if the first operand is null, the second operand is used instead. This can be applied as shown below:

public string FirstName
{
get
{
return _FirstName ?? "";
}
set
{
_FirstName = value;
}
}

Note that the above code isn't strictly necessary for this project; it is simply an example of things you can do if you want strict validation or other behavior within your custom class (which is the primary benefit of making custom classes). If you still aren't sold on the idea, see http://msdn2.microsoft.com/en-us/library/65zdfbdt(VS.71).aspx for a more complete explanation of the benefits.
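
To see these behaviors in action, here is a small, hypothetical console test (the QuickTest class name is illustrative only). It assumes the validating ID set accessor shown above was kept, and that MiddleName's get accessor uses the ?? operator, as it does in the final class below:

using System;
using Library1;

class QuickTest
{
    static void Main()
    {
        PersonEntity p = new PersonEntity();

        // MiddleName's get accessor uses ??, so a null internal
        // value comes back as an empty string.
        p.MiddleName = null;
        Console.WriteLine(p.MiddleName == ""); // True

        // Assumes the validating ID set accessor was kept.
        try
        {
            p.ID = -1;
        }
        catch (ArgumentOutOfRangeException)
        {
            Console.WriteLine("ID rejected as expected");
        }
    }
}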

Since we want to access this class from another project, be sure to change the class declaration to:

namespace Library1
{
public class PersonEntity
{

At its simplest, the PersonEntity.cs class should resemble:

using System;

namespace Library1
{
public class PersonEntity
{
private int _ID;
private string _FirstName;
private string _MiddleName;
private string _LastName;
private double _Height;

public int ID
{
get
{
return _ID;
}
set
{
_ID = value;
}
}

public string FirstName
{
get
{
return _FirstName;
}
set
{
_FirstName = value;
}
}

public string MiddleName
{
get
{
return _MiddleName ?? "";
}
set
{
_MiddleName = value;
}
}

public string LastName
{
get
{
return _LastName;
}
set
{
_LastName = value;
}
}

public double Height
{
get
{
return _Height;
}
set
{
_Height = value;
}
}
}
}

The Data Access Layer

Continuing to build from the back-end data store towards the user interface, we next need to create the Data Access Layer. The Data Access Layer is a class where you define the connection between an ObjectDataSource, a custom class (in our case, PersonEntity), and the data store for instances of that class (in our case, the Person table in SQL Server). After you define the relevant methods in the DAL class, you will configure the ObjectDataSource to use the correct method for each necessary function, and can then set up the ASP.NET elements that will use them.

Again, this can seem like a lot of overhead, but the extra work helps to make future maintenance easier. Instead of having specific SQL information embedded in each web page, you will define it once and be able to use it from any page that needs it. When you inevitably need to change your SQL table, you can update the class and DAL instead of editing multiple copies of the same queries throughout your web site.

To prepare the web application, right-click on WebApp1 in Solution Explorer and select Set As StartUp Project. Next, right-click on References within the WebApp1 project and select Add Reference. Click on the Projects tab and double-click Library1 to add it as a reference.

Rather than hard-coding the database connection string into our DAL class, we will store it in the web.config file, where it can be modified as needed. Note that in a production environment, you should encrypt the database connection string for better security. To add your connection string, double-click the web.config file in the WebApp1 project and add the following lines inside the <configuration> element, replacing the user id and password with valid credentials and setting the data source and initial catalog to the host and database name you are using, respectively:

<connectionStrings>
<add name="15seconds"
connectionString="user id=demo; pwd=demopass; data source=localhost; initial catalog=15SecondsDAL;"
providerName="System.Data.SqlClient"
/>
</connectionStrings>
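
Later, the DAL class will read this entry through WebConfigurationManager (from System.Web.Configuration), keyed by the name attribute above. As a quick sanity check, the lookup as it will appear inside the DAL methods looks like this:

using System.Web.Configuration;

// Fetch the connection string added above by its name attribute.
string connStr =
    WebConfigurationManager.ConnectionStrings["15seconds"].ConnectionString;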

To create the Data Access Layer class, right-click on the WebApp1 project and select Add | Add ASP.NET Folder | App_Data (we will keep the DAL class in this folder). Right-click on the App_Data folder, select Add | Class, and set the name to PersonDAL.cs.

The DAL class must support four standard SQL functions: Select, Insert, Update, and Delete. However, the PersonSET stored procedure actually handles inserts and updates automatically, so we can use the same DAL method for both of these functions. In addition to these functions, the DAL class must return the total number of instances available (in our case, the number of records in the Person table). We will store the record count value returned from the utilPAGE stored procedure, so that when a bound data control needs the total number of records for its paging interface, the DAL will return that value instead of querying the database again.

At run-time, the ObjectDataSource settings will determine which methods to call from your DAL class for select, insert, update, delete, and count, and what information is passed into those methods. We will need to ensure that the ObjectDataSource configuration matches the DAL methods. In fact, enabling and disabling features such as sorting and paging support within the ObjectDataSource will affect which methods are called, so you may need to create overloaded versions of your DAL methods depending on the combinations of features you intend to support. In this example, however, we will include support for sorting and paging and will create methods which meet these needs.

A straightforward implementation of the PersonDAL class is shown below. The System.ComponentModel namespace lets us add declarative attributes ([DataObject] and [DataObjectMethod]) to the source code; these do not change how the code compiles, but they help Visual Studio's design-time tools and the ObjectDataSource identify which class and methods to use.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.Configuration;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using Library1;
using WebApp1;

namespace WebApp1.App_Data
{
[DataObject]
public class PersonDAL
{
private int _count = 0;

[DataObjectMethod(DataObjectMethodType.Select)]
public List<PersonEntity> Select(int startRecIdx, int maxRows, string sortedBy)
{
List<PersonEntity> records = new List<PersonEntity>();
SqlConnection conn = null;
SqlDataReader dr = null;
SqlParameter ret = null;
try
{
conn = new SqlConnection(WebConfigurationManager.ConnectionStrings["15seconds"].ConnectionString);
conn.Open();
SqlCommand cmd = new SqlCommand("utilPAGE", conn);
cmd.CommandType = CommandType.StoredProcedure;
if(String.IsNullOrEmpty(sortedBy))
{
sortedBy = "LastName";
}
int startpg = (startRecIdx / maxRows) + 1;
cmd.Parameters.AddWithValue("@datasrc", "Person");
cmd.Parameters.AddWithValue("@orderBy", sortedBy);
cmd.Parameters.AddWithValue("@startPage", startpg);
cmd.Parameters.AddWithValue("@pageSize", maxRows);
ret = new SqlParameter("ReturnValue", SqlDbType.Int);
ret.Direction = ParameterDirection.ReturnValue;
cmd.Parameters.Add(ret);
dr = cmd.ExecuteReader();
while(dr.Read())
{
PersonEntity item = new PersonEntity();
item.ID = (int)dr["PersonID"];
item.FirstName = dr["FirstName"].ToString();
item.MiddleName = dr["MiddleName"].ToString();
item.LastName = dr["LastName"].ToString();
// Height may be NULL in the database; substitute 0.0 in that case
item.Height = dr["Height"] == DBNull.Value ? 0.0 : Convert.ToDouble(dr["Height"]);
records.Add(item);
}
}
catch
{
throw; // rethrow as-is so the original stack trace is preserved
}
finally
{
if(dr != null)
{
dr.Close();
}
if(conn != null)
{
conn.Close();
}
}
_count = Convert.ToInt32(ret.Value);
return records;
}

[DataObjectMethod(DataObjectMethodType.Update)]
public static bool Update(PersonEntity item)
{
bool retval = false;
SqlConnection conn = null;
try
{
conn = new SqlConnection(WebConfigurationManager.ConnectionStrings["15seconds"].ConnectionString);
conn.Open();
SqlCommand cmd = new SqlCommand("PersonSET", conn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("@ID", item.ID);
cmd.Parameters.AddWithValue("@FirstName", item.FirstName);
cmd.Parameters.AddWithValue("@MiddleName", item.MiddleName);
cmd.Parameters.AddWithValue("@LastName", item.LastName);
cmd.Parameters.AddWithValue("@Height", item.Height);
SqlParameter ret = new SqlParameter("ReturnValue", SqlDbType.Int);
ret.Direction = ParameterDirection.ReturnValue;
cmd.Parameters.Add(ret);
cmd.ExecuteNonQuery();
int result = Convert.ToInt32(ret.Value);
retval = result > 0;
}
catch
{
throw; // rethrow as-is so the original stack trace is preserved
}
finally
{
if(conn != null)
{
conn.Close();
}
}
return retval;
}

[DataObjectMethod(DataObjectMethodType.Delete)]
public static bool Delete(PersonEntity item)
{
bool retval = false;
SqlConnection conn = null;
try
{
conn = new SqlConnection(WebConfigurationManager.ConnectionStrings["15seconds"].ConnectionString);
conn.Open();
SqlCommand cmd = new SqlCommand("PersonDELETE", conn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("@ID", item.ID);
SqlParameter ret = new SqlParameter("ReturnValue", SqlDbType.Int);
ret.Direction = ParameterDirection.ReturnValue;
cmd.Parameters.Add(ret);
cmd.ExecuteNonQuery();
int result = Convert.ToInt32(ret.Value);
retval = result > 0;
}
catch
{
throw; // rethrow as-is so the original stack trace is preserved
}
finally
{
if(conn != null)
{
conn.Close();
}
}
return retval;
}

public int GetRecordCount(int startRecIdx, int maxRows, string sortedBy)
{
return _count;
}

public int GetRecordCount()
{
return _count;
}

}
}

The Select method begins by creating an empty List collection that can hold PersonEntity objects. After declaring some ADO.NET objects, the code enters a try block. This is important, as any SQL objects used in this method need to be closed properly whether the method succeeds or fails. A connection is created based on the web.config connection string entry named 15seconds, and preparations are made for running the utilPAGE stored procedure. Because utilPAGE needs to have a non-blank orderBy argument, the LastName field is used as a default if nothing else is supplied. The parameters for table name, sort field list, starting page number, and page size are added, as is a ReturnValue parameter that will receive anything which the stored procedure explicitly sends with a return statement. The query is executed, the results are converted into new PersonEntity objects, and the PersonEntity objects are added to the List collection. The SQL objects are properly closed in the finally clause, and (because the return value can only be accessed after the SqlDataReader object is closed) the return value is stored in _count.
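
As an aside, the explicit try/finally cleanup in Select can also be written with C# using blocks, which dispose (and therefore close) their objects automatically even when an exception is thrown. The following is a minimal sketch of the same skeleton, not a drop-in replacement; the elided parameter setup is identical to Select above:

// Sketch only: the same open/execute/close flow as Select,
// with 'using' blocks handling the cleanup instead of try/finally.
using (SqlConnection conn = new SqlConnection(
    WebConfigurationManager.ConnectionStrings["15seconds"].ConnectionString))
using (SqlCommand cmd = new SqlCommand("utilPAGE", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    // ... add the @datasrc, @orderBy, @startPage, @pageSize and
    // ReturnValue parameters exactly as in Select ...
    conn.Open();
    using (SqlDataReader dr = cmd.ExecuteReader())
    {
        while (dr.Read())
        {
            // ... build PersonEntity items as in Select ...
        }
    } // the reader is closed here, so ReturnValue is now available
    _count = Convert.ToInt32(cmd.Parameters["ReturnValue"].Value);
}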

The Update method receives a PersonEntity object when it is called. Because the only difference between an Insert and an Update will be the value of the ID field in this object, we will treat both the same and let the stored procedure handle the necessary logic. The Update method will largely imitate the Select method, except that the stored procedure and arguments are different and no result set will be processed. The return value will be checked to see if the update succeeded, with numbers less than one indicating an error.

The Delete method receives a PersonEntity object when it is called; however, only the ID field will be populated. (Which fields are populated is controlled by the DataKeyNames property, which we will see later.) The Delete method is nearly identical to the Update method, except for the stored procedure and arguments. The return value will be checked to see whether the delete succeeded, with numbers less than one indicating an error.
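
Although the ObjectDataSource will normally be the caller, nothing prevents you from exercising the DAL directly, which can be handy as a quick smoke test. A hypothetical fragment (the dal and page names are illustrative) that fetches the first page of ten people sorted by LastName:

// Hypothetical smoke test; requires references to Library1 and WebApp1.App_Data.
PersonDAL dal = new PersonDAL();
List<PersonEntity> page = dal.Select(0, 10, "LastName");
int total = dal.GetRecordCount(); // populated by the preceding Select call

foreach (PersonEntity person in page)
{
    Console.WriteLine("{0} {1} (ID {2})", person.FirstName, person.LastName, person.ID);
}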

Conclusion

The work so far has focused on laying a foundation, and just as with building an actual foundation, there has been a lot of critical activity but little in the way of visible results. In the second part of this article, we will create an interactive web page with very little code that uses everything created so far.