What is the difference between GridView, DataList, and Repeater?

The GridView and DataGrid display their data in a tabular format by default, i.e. as a table of rows and columns. The developer has very little control over changing that table-based rendering of the DataGrid.

The DataList also renders its data in a table, but it gives some flexibility: items can be laid out row-wise or column-wise using its RepeatDirection property.

The Repeater control is the most customizable of the three. It does not render a table by default, so you define from scratch, through its templates, exactly how the data is displayed.
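For illustration, a minimal code-behind sketch (assuming a DataList named dlItems and a Repeater named rptItems are declared in the markup, which is not shown here):

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        var items = new[] { "Alpha", "Beta", "Gamma" };

        // DataList: items can flow horizontally (column-wise) instead of the default vertical layout.
        dlItems.RepeatDirection = RepeatDirection.Horizontal;
        dlItems.DataSource = items;
        dlItems.DataBind();

        // Repeater: no table is emitted unless its templates contain one.
        rptItems.DataSource = items;
        rptItems.DataBind();
    }
}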


Introduction to Mixins For the C# Developer

If you are a C# developer then you may keep hearing about all the cool kids from Smalltalk, Ruby, Python, Scala using these crazy things called mixins. You may even be a little jealous, not because you want the feature, but because they have a feature with an awesome name like “mixin”. The name is pretty sweet. And in fact, it is fairly self-explanatory since mixins are all about “mixing-in behaviors”.

It is actually an Aspect Oriented Programming (AOP) term, which Wikipedia defines as:

A mixin is a class that provides a certain functionality to be inherited by a subclass, but is not meant to stand alone. Inheriting from a mixin is not a form of specialization but is rather a means to collect functionality. A class may inherit most or all of its functionality by inheriting from one or more mixins through multiple inheritance.

No wonder people are confused! That isn’t exactly clear. So let’s try to clear it up just a tiny bit…

Let’s say that we have a C# class that looks like this:

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}
Looks good. And we have another C# class that looks like this:

public class Car
{
    public string Make { get; set; }
    public string Model { get; set; }
}
Mmmmkay. These classes obviously don’t have anything to do with one another, and hopefully they aren’t in the same object hierarchy. What if we need to do something like, say, serialize them to XML? In .NET we would normally fire up an instance of the XmlSerializer, abstracting it into a helper method like this:

public static string SerializeToXml(Object obj)
{
    var xmlSerializer = new XmlSerializer(obj.GetType());
    using (var memoryStream = new MemoryStream())
    {
        using (var xmlWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
        {
            xmlSerializer.Serialize(xmlWriter, obj);
        }
        // ToArray (rather than GetBuffer) avoids trailing unused bytes from the stream's internal buffer.
        return Encoding.UTF8.GetString(memoryStream.ToArray());
    }
}
So now when we need to serialize a class, we can just do this:

string xml = XmlHelper.SerializeToXml(person);
That isn’t too bad, but what if we wanted to do something like this:

string xml = person.SerializeToXml();

In C# 3.0 and later we can introduce an extension method to get exactly that behavior:

public static class XmlExtensions
{
    public static string SerializeToXml(this Object obj)
    {
        var xmlSerializer = new XmlSerializer(obj.GetType());
        using (var memoryStream = new MemoryStream())
        {
            using (var xmlWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
            {
                xmlSerializer.Serialize(xmlWriter, obj);
            }
            return Encoding.UTF8.GetString(memoryStream.ToArray());
        }
    }
}
Now you can see what we are doing here: we are creating an extension method on Object so that we can call it on any class. (Well, it really is just a compiler trick, but it “appears” that we have this method on every class.) We are performing a very weak form of a mixin, because we now have a similar behavior on any object, even if it doesn’t share the same inheritance hierarchy. So, why do I say that this is a “weak” mixin? We are sharing behavior across multiple classes, right?

Well, I say it is “weak”, but I really should say that it is not a mixin at all, because true mixins have state as well as methods. For example, in the above scenario, let’s say we wanted to cache the result of the serialization so that the next time we called it, we would get the same result. This obviously isn’t something you’d want to do unless you had change tracking, but with a C# extension method it is impossible anyway: there is no way to associate state with a particular instance.

So how do other languages support this behavior? Well, Ruby supports it with a concept called modules, which look very similar to classes and can be “included” in classes. Ruby even allows you to apply mixins at runtime, which can be a very powerful, albeit potentially confusing, feature. Python solves the problem by allowing multiple inheritance, so I guess you could say those aren’t really mixins either. It solves all the same problems as mixins, though it adds a bit of complexity to the implementation (see the diamond problem).

In terms of being similar to C#, the Scala solution is the most interesting, perhaps because Scala is a statically typed and (for the most part) compile-time-bound language, so it faces some of the same hurdles in implementing mixins that C# would. In Scala the feature is called “traits”, and traits can be applied to both classes and instances during construction.

I’m not going to show you the Scala implementation of traits; instead I am going to make up a syntax for C# traits so that we can implement the behavior we want above. First we have to decide on a keyword for this new construct, and I am just going to borrow the Scala “trait” keyword. In Scala, traits look like classes, because that is essentially what they are: classes which are not inherited from, but rather “applied” to another class. In fact, traits can even descend from abstract classes.

Nothing Past This Line is Valid C#!

So our C# trait might look something like this:

trait XmlSerializer
{
    public string SerializeToXml()
    {
        var xmlSerializer = new XmlSerializer(this.GetType());
        using (var memoryStream = new MemoryStream())
        {
            using (var xmlWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
            {
                xmlSerializer.Serialize(xmlWriter, this);
            }
            return Encoding.UTF8.GetString(memoryStream.ToArray());
        }
    }
}
Neato. We could then take this trait and apply it to our class using the “with” keyword:

public class Person with XmlSerializer
{
    public string Name { get; set; }
    public int Age { get; set; }
}
Cool, so we have replicated the behavior of the extension method. But now how about that caching? Well, we wouldn’t even need to touch the Person class, we would only have to change the trait:

trait XmlSerializer
{
    private string xml;

    public string SerializeToXml()
    {
        if (String.IsNullOrEmpty(xml))
        {
            var xmlSerializer = new XmlSerializer(this.GetType());
            using (var memoryStream = new MemoryStream())
            {
                using (var xmlWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
                {
                    xmlSerializer.Serialize(xmlWriter, this);
                }
                xml = Encoding.UTF8.GetString(memoryStream.ToArray());
            }
        }
        return xml;
    }
}
Nice. Now if you think about it, if traits had the ability to support abstract members, then you wouldn’t need interfaces at all, would you? Well, it just so happens that Scala doesn’t have interfaces, and uses traits in exactly this capacity. If you declare a mixin with all abstract methods, then you have an interface. It becomes even more powerful when you declare mixins with abstract methods that are used by concrete methods also declared within the mixin. Head hurting yet? It is some pretty powerful stuff, and like any powerful tool it should be used judiciously.
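To make that concrete, here is what a trait with an abstract member might look like in our made-up syntax (still not valid C#, and purely a sketch of my own):

trait Describable
{
    // Abstract member: must be supplied by the class the trait is applied to.
    abstract string Name { get; }

    // Concrete method built on top of the abstract member.
    public string Describe()
    {
        return "This is " + Name;
    }
}

public class Person with Describable
{
    public string Name { get; set; }
    public int Age { get; set; }
}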

One final note about Scala traits, which I mentioned earlier, is that they can be applied to instances during construction. This is interesting because it allows individual instances of a class to have a trait applied. If you think about trying to apply an interface at runtime, you will realize that any trait applied that way could not contain abstract methods, because there would be no way to tell whether the class actually implemented them. This is why Scala only allows traits to be applied at construction of an instance: that way Scala can check at compile time that the class implements all of the abstract methods that are needed. So, in C# this syntax would look something like this:

var person = new Person() with XmlSerializer;
And if we needed to pass this into a method, we could do this:

public string DoSomething(Person with XmlSerializer person)
{
    return person.SerializeToXml();
}

Checking XML for Semantic Equivalence in C#

I was writing a bit of code for a small project that creates some XML I need to pass to another application. In order to test this functionality, I needed to compare the XML generated by my API against some hard-coded XML. I started off with this:


var expectedXml = @"<root><element>testvalue</element></root>";

var actualXml = MyAPI.DoSomeStuff().GenerateXml();

Assert.Equal(expectedXml, actualXml);

But I quickly found out that this wasn’t going to scale. Once the XML got too large, the line ran on far too long, making my tests read horribly. So, I did this:


var expectedXml = @"
<root>
    <element>testvalue</element>
</root>
";

var actualXml = MyAPI.DoSomeStuff().GenerateXml();

Assert.Equal(expectedXml, actualXml);

The problem was that now the two XML strings weren’t equal. Well, the XML is semantically equivalent, it just isn’t equal as a string comparison. The reason is that all of the extra whitespace and end-of-line characters throw off the comparison. You might be thinking: well, just strip out the whitespace and EOL characters. It ain’t that easy. What happens when that whitespace is inside an XML element? At that point it becomes meaningful for comparison purposes.

I didn’t want to write my own comparison code (who wants to write that?), so I started hunting around. Since I was already using the .NET 3.5 XElement libraries, I started looking there first. I came across a little method on the XNode class called DeepEquals, and guess what, it does exactly what I want: it compares a node and all of its child nodes for semantic equivalence. I’m sure there are probably a few gotchas in there for me, but after preliminary tests it appears to work perfectly.

I created a little method to do my XML asserts for me:


private void AssertEqualXml(string expectedXml, string actualXml)
{
    Assert.IsTrue(
        XNode.DeepEquals(XElement.Parse(expectedXml), XElement.Parse(actualXml)),
        String.Format("{0} \n does not equal \n{1}", actualXml, expectedXml));
}

There you have it. It loads the expected and actual XML into XElements and then calls DeepEquals on them. Now I can write the XML I want to compare in the most readable fashion and not worry about how the strings line up.
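For example, a quick usage sketch with some made-up XML; the two strings differ only in insignificant whitespace, so the assert should pass even though a plain string comparison would fail:

AssertEqualXml(
    "<root><element>testvalue</element></root>",
    @"<root>
          <element>testvalue</element>
      </root>");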

Easy And Safe Model Binding In ASP.NET MVC

A little over a year ago (wow, it seems like only yesterday), I made a post called Think Before You Bind. In that post I explained why, when you use automatic model binding in ASP.NET MVC, you absolutely must make sure that you are only binding the properties that you expect. The reason is that ASP.NET MVC really has no way of telling what was supposed to be posted to the server and what wasn’t, so someone could tamper with, or fabricate, post data and overwrite properties that you never intended to be changed.
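To make the risk concrete, here is a minimal sketch (the type, action, and property names are made up, not from the original post):

public class UserProfile
{
    public string Name { get; set; }
    public bool IsAdmin { get; set; }
}

[HttpPost]
public ActionResult Edit(UserProfile profile)
{
    // Even if the view only rendered a Name textbox, a tampered POST that
    // includes IsAdmin=true will happily set profile.IsAdmin here.
    return View(profile);
}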

This isn’t something unexpected, but it is definitely not something that Web Forms developers really have to consider when building their solutions. On the flip side, Web Forms tracks which fields are supposed to be on the form, which ties you to a fairly static set of fields unless you want to hack your way around that model. And I think many of us know how ugly that can get…

So, the basic problem is that ASP.NET MVC doesn’t care what you render to the user, and it doesn’t care what gets posted back. If the posted value name and the property name match, then ASP.NET MVC is going to map the value onto the property, even if you never rendered a field by that name. It is a concern, but one that is easily avoided by using a very simple approach: models that are specific to your views, usually referred to as view models (not to be confused with the MVVM – Model-View-ViewModel – pattern). View models are useful for a number of reasons, but in this case we are leveraging them to expose a surface with only the properties that you want bound.
And that approach works well, but really only if your objects are only ever used in one context. What happens if you need to edit an object as both an end user and an administrator? Certainly you don’t want to allow end users to edit the same properties as an admin. If you used the same view model for both, you would be setting yourself up for a potential security hole, since the end user could (as we explained earlier) slip extra data into the post and update the view model in ways that you didn’t expect. So, how do you fix this?

Well, one solution is to create two different view models, one for the end user and one for the admin. But that raises the question: what if we need even more contexts? Or what if we had screens that edited pieces of a larger object? Do we just keep introducing more and more view models? We could, but that would be a lot of work… if only we had some way to create a view on top of an object which exposes only the properties that we want to see.
For the astute, you’ll notice that I just explained interfaces, and last time I checked, C# has those. Thanks to a comment from my good friend Simone Chiaretta (and others), the idea of using a single view model with different interfaces was proposed. Then instead of using the class type to bind, you could just use the interface! Something like this is the result:


public class PersonViewModel : IEditPersonAsUser, IEditPersonAsAdmin
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Role { get; set; }
    public string EmailAddress { get; set; }
}

public interface IEditPersonAsUser
{
    string FirstName { get; set; }
    string LastName { get; set; }
    string EmailAddress { get; set; }
}

internal interface IEditPersonAsAdmin
{
    string FirstName { get; set; }
    string LastName { get; set; }
    string EmailAddress { get; set; }
    string Role { get; set; }
}

The downside is that you can no longer use the automatic model binding that ASP.NET MVC gives you. The reason is fairly self-explanatory: how would ASP.NET MVC know which model to bind for this method?


public ActionResult Edit(IEditPersonAsUser personModel)

The answer is, “it wouldn’t”. You could theoretically put that interface on any number of implementations, so there is no quick and reliable way to pick out the right implementer. (Well, when I say “no way”, you could look for a single implementer and throw an exception if you find more than one.) This is easy enough to work around, though. You can write code to perform the binding manually (this is just screaming for a custom model binder!):


var personViewModel = new PersonViewModel();
UpdateModel<IEditPersonAsUser>(personViewModel);

Notice here that we are telling the UpdateModel method to bind the view model using only the IEditPersonAsUser interface. This works well, but it would be better if we could avoid repeating those few lines of code over and over. We could put a small method in a base controller and make it even easier:

var personViewModel = Bind<PersonViewModel, IEditPersonAsUser>();

A tiny bit easier and cleaner. Now you can create view models for specific entities, and then you can reuse them in multiple scenarios without having to worry a bunch about white lists, black lists, or creating a ton of different view model classes.
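For what it’s worth, a rough sketch of what such a base-controller helper might look like (the method name and constraints are my own guess, not the original implementation):

public abstract class BaseController : Controller
{
    // Creates the view model and binds only the properties exposed by the given interface.
    protected TModel Bind<TModel, TInterface>()
        where TModel : class, TInterface, new()
        where TInterface : class
    {
        var model = new TModel();
        UpdateModel<TInterface>(model);
        return model;
    }
}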

As I mentioned earlier, though, this is just crying out for a custom model binder: we could set one up so that when it sees an interface type, it searches for the single type that implements the interface and throws an exception if it finds more than one. Since our whole purpose here is to use an interface to constrain a single view model to different views of the same model, that shouldn’t hurt us at all. Maybe I’ll implement that for you in a future post.

I hope that you found this post informative and useful, if you have any ideas for things that could be improved or modified, please post me a comment!

Resetting local Ports and Devices from .NET

Currently I am working on C# applications that communicate with several external devices connected via USB ports. In rare cases the ports just stop working correctly, so we needed a programmatic way to reset them. Doing this from C# is not trivial. The solution we implemented uses the following components:
– C#’s SerialPort class
– WMI (Windows Management Instrumentation)
– P/Invoke with calls to the Windows Setup API

Accessing a port using C#
Using a serial port from C# is rather simple. The .NET Framework provides the SerialPort class as a wrapper around the underlying Win32 port API. You simply create a new instance of the class, providing the name of an existing port. The instance has methods to open or close the port and to read and write data from its underlying stream. It also provides events to notify listeners when data or errors are received. Here is a small example of how to open a port, first checking whether the given port name exists in the operating system:


public SerialPort OpenPort(string portName)
{
    if (!IsPortAvailable(portName))
    {
        return null;
    }

    var port = new SerialPort(portName);

    try
    {
        port.Open();
    }
    catch (UnauthorizedAccessException) { ... }
    catch (IOException) { ... }
    catch (ArgumentException) { ... }

    return port;
}

private bool IsPortAvailable(string portName)
{
    // Retrieve the list of port names currently known to the operating system.
    string[] ports = SerialPort.GetPortNames();
    if (ports != null && ports.Length > 0)
    {
        return ports.Where(new Func<string, bool>((s) =>
        {
            return s.Equals(portName,
                StringComparison.InvariantCultureIgnoreCase);
        })).Count() == 1;
    }
    return false;
}

In rare cases the Open method of the port threw an IOException in our applications. The situation was not reproducible, but it was also not acceptable. We noticed that after deactivating the port in the Device Manager and reactivating it, everything worked fine again. So we searched for a way to do exactly the same thing from code.

Enable/Disable Devices
First of all, a function or set of functions was needed to disable and enable a given port. This cannot be done directly from C# and needs some P/Invoke calls to the Win32 API. Luckily others have had similar problems, and I found a very good solution at http://www.eggheadcafe.com/community/csharp/2/10315145/enabledisable-comm-port-by-programming.aspx. It uses the device installation functions from the Win32 Setup API. All you need is the class GUID for the device set and the instance Id of the specific device you want to disable or enable. Have a look at the link for the code, or download the accompanying code for this blog post.

Retrieving the Port's Instance Id
The last thing to do is to acquire the correct instance Id for our port. We need a method that takes the port name and retrieves the corresponding instance Id. For the exact definition of an instance Id in Windows terms, have a look at http://msdn.microsoft.com/en-us/library/windows/hardware/ff541224(v=vs.85).aspx. In our case we’d like to use the Plug ‘n’ Play device Id that can also be seen in the properties window of a device inside the Device Manager. For this purpose we are going to use WMI. If you need more information about WMI, have a look at http://msdn.microsoft.com/en-us/library/windows/desktop/aa394582(v=vs.85).aspx. WMI provides the Win32_SerialPort class that can be used to iterate over all mounted ports of the operating system. Two properties of the Win32_SerialPort class are important for us: the DeviceID property, which contains the port name, and the PNPDeviceID property, which gives the Plug ‘n’ Play instance Id. Note that while this works perfectly for Plug ‘n’ Play devices, it may not work for other kinds of devices.

// Look up the Plug 'n' Play instance Id for the given port name.
string instanceId = null;
ManagementObjectSearcher searcher =
    new ManagementObjectSearcher("select * from Win32_SerialPort");
foreach (ManagementObject port in searcher.Get())
{
    if (port["DeviceID"].ToString().Equals(portName))
    {
        instanceId = port["PNPDeviceID"].ToString();
        break;
    }
}

Once we have found the appropriate instance Id, we can use it together with the Win32 Setup API to retrieve a device info set and the corresponding device info data. If a device info entry for the instance Id is found, we can use its class GUID and the instance Id to disable and re-enable the device. The following method resets a port with a given instance Id:

public static bool TryResetPortByInstanceId(string instanceId)
{
    SafeDeviceInfoSetHandle diSetHandle = null;
    if (!String.IsNullOrEmpty(instanceId))
    {
        try
        {
            Guid[] guidArray = GetGuidFromName("Ports");

            // Get the handle to a device information set for all
            // devices matching classGuid that are present on the system.
            diSetHandle = NativeMethods.SetupDiGetClassDevs(
                ref guidArray[0],
                null,
                IntPtr.Zero,
                SetupDiGetClassDevsFlags.DeviceInterface);

            // Get the device information data for each matching device.
            DeviceInfoData[] diData = GetDeviceInfoData(diSetHandle);

            // Try to find the object with the same instance Id.
            foreach (var infoData in diData)
            {
                var instanceIds =
                    GetInstanceIdsFromClassGuid(infoData.ClassGuid);
                foreach (var id in instanceIds)
                {
                    if (id.Equals(instanceId))
                    {
                        // Disable the port.
                        SetDeviceEnabled(infoData.ClassGuid, id, false);
                        // Wait some milliseconds.
                        Thread.Sleep(200);
                        // Enable the port again.
                        SetDeviceEnabled(infoData.ClassGuid, id, true);
                        return true;
                    }
                }
            }
        }
        catch (Exception)
        {
            return false;
        }
        finally
        {
            if (diSetHandle != null)
            {
                if (diSetHandle.IsClosed == false)
                {
                    diSetHandle.Close();
                }
                diSetHandle.Dispose();
            }
        }
    }
    return false;
}

With the code set up so far we can now easily reset a port whenever we get an IOException while trying to open it. We just call the method inside our exception handler and try to open the port again. That solved our initial problem. Be aware that it may take some time to disable and re-enable the port, so it may be a good idea to do it on a separate thread if you are working in a GUI application.
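As a usage sketch, assuming portName and instanceId were obtained as shown earlier, the retry logic might look roughly like this:

var port = new SerialPort(portName);
try
{
    port.Open();
}
catch (IOException)
{
    // The port appears to be stuck: disable and re-enable the device, then try once more.
    if (TryResetPortByInstanceId(instanceId))
    {
        port.Open();
    }
}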

ASP.NET Web API 4 Beta – Custom Unity Http Controller Factory

The ASP.NET Web API provides support for RESTful services that make use of JSON and the concepts of ASP.NET MVC. Instead of MVC's Controller base class, the Web API uses the ApiController class. In order to use a dependency injection container like Unity for controller creation, you need to create your own controller factory. This blog post shows how to build a custom HTTP controller factory together with the Unity application block.
A custom controller factory
Each controller factory in ASP.NET Web API must implement the IHttpControllerFactory interface. This again is different from MVC, where you implement IControllerFactory. The IHttpControllerFactory interface provides just two methods – CreateController and ReleaseController – and by reading the names it’s obvious what both should do. Together with Unity we will use these methods to create and release controllers with the help of the Unity container:


public class UnityControllerFactory : IHttpControllerFactory
{
    private readonly IUnityContainer container;
    private readonly IHttpControllerFactory defaultFactory;
    private readonly HttpConfiguration configuration;

    public UnityControllerFactory(IUnityContainer container,
        HttpConfiguration configuration)
    {
        this.configuration = configuration;
        this.container = container;
        this.defaultFactory = new DefaultHttpControllerFactory(configuration);
    }

    public IHttpController CreateController(
        HttpControllerContext controllerContext, string controllerName)
    {
        if (container.IsRegistered<IHttpController>(controllerName))
        {
            var controller = container.Resolve<IHttpController>(controllerName);
            controllerContext.ControllerDescriptor =
                new HttpControllerDescriptor(
                    this.configuration,
                    controllerName,
                    controller.GetType());
            controllerContext.Controller = controller;
            return controller;
        }
        return defaultFactory.CreateController(controllerContext, controllerName);
    }

    public void ReleaseController(IHttpController controller)
    {
        if (container.IsRegistered(controller.GetType()))
        {
            container.Teardown(controller);
        }
        else
        {
            defaultFactory.ReleaseController(controller);
        }
    }
}

Let’s have a look at the code. We hold a reference to the Unity container in order to resolve controller dependencies. We also keep a default factory for the case where a controller is not registered with Unity; the framework provides the DefaultHttpControllerFactory, which is normally used to create API controllers. The other reference we need is to the current HTTP configuration, which is required to create a proper descriptor for the created controller.
The CreateController method tries to find the controller with the given name inside the Unity container. If the controller is not registered with Unity, the default factory is used to create an instance. If it is registered, it is resolved by Unity and a proper controller descriptor is generated. The descriptor, together with the HTTP configuration, is needed to build up a correct HTTP context in which the controller is executed.
The ReleaseController method again checks whether the controller is registered with Unity and calls the appropriate release method.
Initializing the controller factory
Once you have the custom HTTP controller factory, there is only one last step: instantiate the Unity container inside global.asax and pass it to the factory. Additionally, you need to register the factory as the default factory for creating API controllers. Here is the short code snippet:

protected void Application_Start()
{
    var container = new UnityContainer();

    // Register the customer controller with Unity under a name,
    // mapped to IHttpController so the factory can resolve it.
    container.RegisterType<IHttpController, CustomerController>("customer");
    // Add additional controllers here.

    var factory = new UnityControllerFactory(container,
        GlobalConfiguration.Configuration);
    GlobalConfiguration.Configuration.ServiceResolver.SetService(
        typeof(IHttpControllerFactory), factory);
}


I think the code is more or less self-explanatory. The static Configuration property of the GlobalConfiguration class gives you access to the current HTTP configuration object, which the factory needs to build the controller descriptor. Pay attention to the controller name passed to the RegisterType method: it should match the controller name used by the API routes, because that is the name passed to the controller factory.
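As a purely illustrative example (none of these types come from the original post), a controller registered under the "customer" name might look like this, with its repository dependency also resolved by Unity, assuming ICustomerRepository and Customer are defined and registered elsewhere:

public class CustomerController : ApiController
{
    private readonly ICustomerRepository repository;

    public CustomerController(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    // GET api/customer
    public IEnumerable<Customer> Get()
    {
        return repository.GetAll();
    }
}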