What is the difference between GridView, DataList, and Repeater?

GridView and DataGrid display all data in a tabular format by default, i.e. in rows and columns, and the developer has little control over changing that tabular layout.

DataList also displays data in a table, but gives some flexibility to lay the data out row-wise or column-wise using the RepeatDirection property.

The Repeater control is highly customizable. It does not display data in a table by default, so you can customize the way you want to display data from scratch.

Leverage the C# Preprocessor

Like other languages in the C-family, C# supports a set of ‘preprocessor’ directives, most notably #define, #if and #endif (technically, csc.exe does not literally have a preprocessor as these symbols are resolved at the lexical analysis phase, but no need to split hairs…).

The #define directive allows you to set up custom symbols which control code compilation. Be very aware that unlike C and C++, C#’s #define does not allow you to create macro-like code. Once a symbol is defined, the #if and #endif directives may be used to test for said symbol. By way of a common example:

#define DEBUG
using System;

public class MyClass
{
    public static void Main()
    {
#if DEBUG
        Console.WriteLine("DEBUG symbol is defined!");
#endif
    }
}
When you use the #define directive, the symbol is only in effect within the defining file. If you wish to define project-wide symbols, simply access your project’s property page, navigate to the “Configuration Properties | Build” node, and edit the “Conditional Compilation Constants” field. Finally, if you wish to disable a symbol for a given file, you may make use of the #undef directive.
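For instance, a contrived sketch of #undef in action; note that all such directives must appear before any code in the file:

```csharp
#define VERBOSE
#undef VERBOSE  // the symbol is now undefined for the rest of this file

using System;

public class UndefDemo
{
    public static void Main()
    {
#if VERBOSE
        Console.WriteLine("Verbose output enabled.");
#else
        Console.WriteLine("Verbose output disabled.");
#endif
    }
}
```

Since VERBOSE is undefined by the time the #if is evaluated, only the #else branch is compiled into the program.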

Why Doesn’t C# Implement “Top Level” Methods?

C# requires that every method be in some class, even if it is a static method in a static class in the global namespace. Other languages allow “top level” functions. A recent stackoverflow post asks why that is.

I am asked “why doesn’t C# implement feature X?” all the time. The answer is always the same: because no one ever designed, specified, implemented, tested, documented and shipped that feature. All six of those things are necessary to make a feature happen. All of them cost huge amounts of time, effort and money. Features are not cheap, and we try very hard to make sure that we are only shipping those features which give the best possible benefits to our users given our constrained time, effort and money budgets.

I understand that such a general answer probably does not address the specific question.

In this particular case, the clear user benefit was in the past not large enough to justify the complications to the language which would ensue. By restricting how different language entities nest inside each other we (1) restrict legal programs to be in a common, easily understood style, and (2) make it possible to define “identifier lookup” rules which are comprehensible, specifiable, implementable, testable and documentable.

By restricting method bodies to always be inside a struct or class, we make it easier to reason about the meaning of an unqualified identifier used in an invocation context; such a thing is always an invocable member of the current type (or a base type).

Now, JScript.NET has this feature. (And in fact, JScript.NET goes even further; you can have program statements “at the top level” too.) A reasonable question is “why is this feature good for JScript but bad for C#?”

First off, I reject the premise that the feature is “bad” for C#. The feature might well be good for C#, just not good enough compared to its costs (and to the opportunity cost of doing that feature instead of a more valuable feature.) The feature might become good enough for C# if its costs are lowered, or if the compelling benefit to customers becomes higher.

Second, the question assumes that the feature is good for JScript.NET. Why is it good for JScript.NET?

It’s good for JScript.NET because JScript.NET was designed to be a “scripty” language as well as a “large-scale development” language. “JScript classic”‘s original design as a scripting language requires that “a one-line program actually be one line”. If your intention is to make a language that allows for rapid development of short, simple scripts by novice developers then you want to minimize the amount of “ritual incantations” that must happen in every program. In JScript you do not want to have to start with a bunch of using clauses and define a class and then put stuff in the class and have a Main routine and blah blah blah, all this ritual just to get Hello World running.

C# was designed to be a large-scale application development language geared towards pro devs from day one; it was never intended to be a scripting language. Its design therefore encourages organizing even small chunks of code into components immediately. C# is a component-oriented language. We therefore want to encourage programming in a component-based style and discourage features that work against that style.

This is changing. “REPL” languages like F#, long popular in academia, are increasing in popularity in industry. There’s a renewed interest in “scripty” application programmability via tools like Visual Studio Tools for Applications. These forces cause us to re-evaluate whether “a one line program is one line” is a sensible goal for hypothetical future versions of C#. Hitherto it has been an explicit non-goal of the language design.

(As always, whenever I discuss the hypothetical “next version of C#”, keep in mind that we have not announced any next version, that it might never happen, and that it is utterly premature to think about feature sets or schedules. All speculation about future versions of unannounced products should be taken as “for entertainment purposes only” musings, not as promises about future offerings.)

We are therefore considering adding this feature to a hypothetical future version of C#, in order to better support “scripty” scenarios and REPL evaluation. When the existence of powerful new tools is predicated upon the existence of language features, that points towards getting the language features done.

Why is deriving a public class from an internal class illegal?

In C# it is illegal to declare a class D whose base class B is in any way less accessible than D. I’m occasionally asked why that is. There are a number of reasons; today I’ll start with a very specific scenario and then talk about a general philosophy.

Suppose you and your coworker Alice are developing the code for assembly Foo, which you intend to be fully trusted by its users. Alice writes:

public class B
{
    public void Dangerous() {…}
}

And you write

public class D : B
{
    … other stuff …
}

Later, Alice gets a security review from Bob, who points out that method Dangerous could be used as a component of an attack by partially-trusted code, and who further points out that customer scenarios do not actually require B to be used directly by customers in the first place; B is actually only being used as an implementation detail of other classes. So in keeping with the principle of least privilege, Alice changes B to:

internal class B
{
public void Dangerous() {…}
}

Alice need not change the accessibility of Dangerous, because of course “public” means “public to the people who can see the class in the first place”.

So now what should happen when Alice recompiles before she checks in this change? The C# compiler does not know if you, the author of class D, intended method Dangerous to be accessible by a user of public class D. On the one hand, it is a public method of a base class, and so it seems like it should be accessible. On the other hand, the fact that B is internal is evidence that Dangerous is supposed to be inaccessible outside the assembly. A basic design principle of C# is that when the intention is unclear, the compiler brings this fact to your attention by failing. The compiler is identifying yet another form of the Brittle Base Class Failure, which long-time readers know has shown up in numerous places in the design of C#.

Rather than simply making this change and hoping for the best, you and Alice need to sit down and talk about whether B really is a sensible base class of D; it seems plausible that either (1) D ought to be internal also, or (2) D ought to favour composition over inheritance. Which brings us to my more general point:

More generally: the inheritance mechanism is simply the fact that all heritable members of the base type are also members of the derived type. But the semantics of the inheritance relationship are intended to model the “is a kind of” relationship. It seems reasonable that if D is a kind of B, and D is accessible at a location, then B ought to be accessible at that location as well. It would be strange if you could only use the fact that “a Giraffe is a kind of Animal” at certain locations.

In short, this rule of the language encourages you to use inheritance relationships to model the business domain semantics rather than as a mechanism for code reuse.

Finally, I note that as an alternative, it is legal for a public class to implement an internal interface. In that scenario there is no danger of accidentally exposing dangerous functionality from the interface to the implementing type because of course the interface is not associated with any functionality in the first place; an interface is logically “abstract”. Implementing an internal interface can be used as a mechanism that allows public components in the same assembly to communicate with each other over “back channels” that are not exposed to the public.
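A minimal sketch of that back-channel pattern; the class and interface names here are invented for illustration:

```csharp
using System;

// The interface is internal: only code in this assembly can see it.
internal interface ISecretChannel
{
    void ReceiveMessage(string message);
}

// Both classes are public, but the channel between them is not.
public class Subscriber : ISecretChannel
{
    public string LastMessage { get; private set; }

    // An explicit implementation keeps the member off the public
    // surface entirely; external callers cannot invoke it at all.
    void ISecretChannel.ReceiveMessage(string message)
    {
        LastMessage = message;
    }
}

public class Publisher
{
    // Inside the assembly we can cast to the internal interface.
    public void Notify(object subscriber, string message)
    {
        var channel = subscriber as ISecretChannel;
        if (channel != null)
            channel.ReceiveMessage(message);
    }
}
```

Code outside the assembly sees two unrelated public classes; the ReceiveMessage back channel between them is invisible and uncallable.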

Introduction to Mixins For the C# Developer

If you are a C# developer then you may keep hearing about all the cool kids from Smalltalk, Ruby, Python, Scala using these crazy things called mixins. You may even be a little jealous, not because you want the feature, but because they have a feature with an awesome name like “mixin”. The name is pretty sweet. And in fact, it is fairly self-explanatory since mixins are all about “mixing-in behaviors”.

It is actually an Aspect Oriented Programming (AOP) term, which Wikipedia defines as:

A mixin is a class that provides a certain functionality to be inherited by a subclass, but is not meant to stand alone. Inheriting from a mixin is not a form of specialization but is rather a means to collect functionality. A class may inherit most or all of its functionality by inheriting from one or more mixins through multiple inheritance.

No wonder people are confused! That isn’t exactly clear. So let’s try to clear it up just a tiny bit…

Let’s say that we have a C# class that looks like this:

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}
Looks good. And we have another C# class that looks like this:

public class Car
{
    public string Make { get; set; }
    public string Model { get; set; }
}
Mmmmkay. These classes obviously don’t have anything to do with one another, and hopefully they aren’t in the same object hierarchy. What if we need to do something like, say, serialize to XML? In .NET we would normally fire up an instance of the XmlSerializer, abstracting it into a method like this:

public static string SerializeToXml(Object obj)
{
    var xmlSerializer = new XmlSerializer(obj.GetType());
    using (var memoryStream = new MemoryStream())
    {
        using (var xmlWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
        {
            xmlSerializer.Serialize(xmlWriter, obj);
        }
        // ToArray() returns only the bytes actually written; GetBuffer()
        // would return the whole internal buffer, padding included.
        return Encoding.UTF8.GetString(memoryStream.ToArray());
    }
}
So now when we need to serialize a class, we can just do this:

string xml = XmlHelper.SerializeToXml(person);
That isn’t too bad, but what if we wanted to do something like this:

string xml = person.SerializeToXml();

In C# 3.0 and later we can introduce an extension method to get exactly that behavior:

public static class XmlExtensions
{
    public static string SerializeToXml(this Object obj)
    {
        var xmlSerializer = new XmlSerializer(obj.GetType());
        using (var memoryStream = new MemoryStream())
        {
            using (var xmlWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
            {
                xmlSerializer.Serialize(xmlWriter, obj);
            }
            return Encoding.UTF8.GetString(memoryStream.ToArray());
        }
    }
}
Now you see what we are doing here: we are creating an extension method on Object so that we can use this on any class. (Well, it really is just a compiler trick, but it “appears” that we have this method on every class.) We are performing a very weak form of a mixin, because we now have a shared behavior for any object, even objects that don’t share an inheritance hierarchy. So, why do I say that this is a “weak” mixin? We are sharing behavior across multiple classes, right?

Well, I say it is “weak”, but I really should say that it is not a mixin at all, because true mixins have state as well as methods. For example, in the above scenario, let’s say we wanted to cache the result of the serialization so that the next time we called it, we would get the same result back. (This obviously isn’t something you’d want to do unless you had change tracking.) With a C# extension method this is impossible: there is no way to associate state with a particular instance.
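One partial workaround, assuming .NET 4’s ConditionalWeakTable is available, is to keep the per-instance state in a side table keyed by the instance; the SerializeCached helper below is an invented sketch, not part of the original argument:

```csharp
using System;
using System.Runtime.CompilerServices;

public static class CachedSerializerExtensions
{
    // Maps each instance to its cached result. Entries are released
    // automatically when the key object is garbage collected, so the
    // side table does not keep instances alive or leak memory.
    private static readonly ConditionalWeakTable<object, StrongBox<string>> Cache =
        new ConditionalWeakTable<object, StrongBox<string>>();

    public static string SerializeCached(this object obj, Func<object, string> serialize)
    {
        StrongBox<string> box = Cache.GetOrCreateValue(obj);
        if (box.Value == null)
            box.Value = serialize(obj);  // compute once per instance
        return box.Value;
    }
}
```

This is still not a mixin — the state lives in a side table, not on the object — but it works around the “no instance state in extension methods” limitation.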

So how do other languages support this behavior? Well, Ruby supports it with a concept called modules, which look very similar to classes and can be “included” into classes. Ruby even allows you to apply mixins at runtime, which can be a very powerful, albeit potentially confusing, feature. Python solves the problem by allowing multiple inheritance, so I guess you could say those aren’t really mixins either. Multiple inheritance solves all the same problems as mixins, though it adds a bit of complexity to the implementation (see the Diamond Problem).

In terms of being similar to C#, the Scala solution is the most interesting, perhaps because Scala is a statically typed and (for the most part) compile-time bound language, and so it faces some of the same hurdles that C# would face in implementing mixins. In Scala the feature is called “traits”, and traits can be applied to both classes and instances during construction.

I’m not going to show you the Scala implementation of traits, but what I am going to do is make up a syntax for C# traits so that we can implement the behavior we want above. So first we are going to have to decide on a keyword for this new construct, and I am going to just use the Scala “trait” keyword. In Scala traits look like classes, because that is essentially what they are. They are classes which are not inherited from, but rather “applied” to another class. In fact, traits can even descend from abstract classes.

Nothing Past This Line is Valid C#!

So our C# trait might look something like this:

trait XmlSerializer
{
    public string SerializeToXml()
    {
        var xmlSerializer = new XmlSerializer(this.GetType());
        using (var memoryStream = new MemoryStream())
        {
            using (var xmlWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
            {
                xmlSerializer.Serialize(xmlWriter, this);
            }
            return Encoding.UTF8.GetString(memoryStream.ToArray());
        }
    }
}
Neato. We could then take this trait, and apply it to our class like using the “with” keyword:

public class Person with XmlSerializer
{
    public string Name { get; set; }
    public int Age { get; set; }
}
Cool, so we have replicated the behavior of the extension method. But now how about that caching? Well, we wouldn’t even need to touch the Person class, we would only have to change the trait:

trait XmlSerializer
{
    private string xml;

    public string SerializeToXml()
    {
        if (String.IsNullOrEmpty(xml))
        {
            var xmlSerializer = new XmlSerializer(this.GetType());
            using (var memoryStream = new MemoryStream())
            {
                using (var xmlWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
                {
                    xmlSerializer.Serialize(xmlWriter, this);
                }
                xml = Encoding.UTF8.GetString(memoryStream.ToArray());
            }
        }
        return xml;
    }
}
Nice. Now if you think about it, if traits had the ability to support abstract members, then you wouldn’t need interfaces at all, would you? Well, it just so happens that Scala doesn’t have interfaces, and uses traits in this exact capacity. If you declare a mixin with all abstract methods, then you have an interface. It becomes even more powerful when you declare mixins with abstract methods that are used by concrete methods also declared within the mixin. Head hurting yet? It is some pretty powerful stuff, and should be used as any powerful tool should be: judiciously.

One final note about Scala traits, which I mentioned earlier, is that they can be applied to instances during construction. This is an interesting behavior because it allows individual instances of classes to have a trait applied. If you think about trying to apply an interface at runtime, you will realize that any trait applied at runtime would have to contain no abstract methods; otherwise you would have no way to tell whether the class implemented the methods being applied. This is why Scala only allows traits to be applied at construction of an instance: that way Scala can check at compile time that the class implements all of the abstract methods that are needed. So, in C# this syntax would look something like this:

var person = new Person() with XmlSerializer;
And if we needed to pass this into a method, we could do this:

public string DoSomething(Person with XmlSerializer person)
{
    return person.SerializeToXml();
}

Checking XML for Semantic Equivalence in C#

I was writing a bit of code for a small project and it was creating some XML that I need to pass to another application. So in order to test this functionality, I needed to compare the XML generated by my API against some hard coded XML. I started off with this:


var expectedXml = @"<test>value</test>";

var actualXml = MyAPI.DoSomeStuff().GenerateXml();

Assert.Equal(expectedXml, actualXml);

But I quickly found out that this wasn’t going to scale. Once the XML got too large, it would carry over too far making my tests read horribly. So, I did this:


var expectedXml = @"
<test>
    value
</test>";

var actualXml = MyAPI.DoSomeStuff().GenerateXml();

Assert.Equal(expectedXml, actualXml);

The problem was that now the XML wasn’t equivalent. Well, it is semantically equivalent, it just isn’t equivalent for a string comparison. The reason for this is that all of that extra white space and the EOL characters screw up the comparison. You might be thinking: well, just strip out the white space and EOL characters. It ain’t that easy. What happens when that white space is inside of an XML element? At that point it becomes meaningful for comparison purposes.

So I didn’t want to write my own comparison code (who wants to write that?) so I started hunting around. Since I was already using the .NET 3.5 XElement libraries, I started looking there first. I came across a little method on the XNode class called DeepEquals, and guess what, it does exactly what I want. It compares a node and all child nodes for semantic equivalence. I’m sure that there are probably a few gotchas in there for me, but after preliminary tests, it appears to work perfectly.

I created a little method to do my XML asserts for me:


private void AssertEqualXml(string expectedXml, string actualXml)
{
    Assert.IsTrue(XNode.DeepEquals(XElement.Parse(expectedXml), XElement.Parse(actualXml)),
        String.Format("{0} \n does not equal \n{1}", actualXml, expectedXml));
}

There you have it. It loads the expected and actual XML into XElements and then calls DeepEquals on them. Now I can write the XML I want to compare in the most readable fashion and not worry about how it is going to compare.
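To see the behavior concretely, here is a small sketch (the element names are invented): formatting whitespace between elements is ignored, because XElement.Parse drops insignificant whitespace by default, while whitespace inside a text value remains significant:

```csharp
using System;
using System.Xml.Linq;

public class DeepEqualsDemo
{
    public static void Main()
    {
        var compact  = XElement.Parse("<order><item>widget</item></order>");
        var indented = XElement.Parse("<order>\n    <item>widget</item>\n</order>");
        var padded   = XElement.Parse("<order><item> widget </item></order>");

        // Indentation between elements does not matter...
        Console.WriteLine(XNode.DeepEquals(compact, indented)); // True

        // ...but whitespace inside a text node does.
        Console.WriteLine(XNode.DeepEquals(compact, padded));   // False
    }
}
```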

Overloading Dynamic

If you’ve been checking out Visual Studio 2010 (or reading my blog) then you might have noticed the new “dynamic” keyword in C# 4.0. So what is the dynamic keyword? The dynamic keyword allows us to perform late binding in C#! What is late binding, you ask? Well, it means that operations on the variable aren’t bound at compile time; they are instead bound at runtime. By “bound” I mean that the decision of which particular member to invoke is made while the application is running, not during compilation.

In the past, you might have seen examples of dynamic using a sample like this:


dynamic value = "test string";

value.DoSomethingSuper();

Now obviously the String class does not have a method called “DoSomethingSuper”, but this code will compile. It will blow up at runtime with an error saying that the string class does not contain a definition for “DoSomethingSuper”. If you want a more in depth look at the basic usage of the keyword, see the linked post above.

So Much Dynamicness

What is really interesting is that the dynamic keyword isn’t just for declaring local variables. We can use it for method parameters, return types, and almost anywhere else that we can specify a type. This means that we could actually write something like this (note that while you might not want to, you could):


public static dynamic DoSomethingDynamic(dynamic dyn1, dynamic dyn2)
{
    return dyn1 + dyn2;
}

Interesting. So this method does basically what we would find in any dynamic language such as Python or Ruby. I can call it like this:


DoSomethingDynamic(3, 5);

Or I can call it like this:


DoSomethingDynamic(3.5, 5.2);

Or even like this:


DoSomethingDynamic("hello", "there");

And guess what, it works like you would expect. The first two calls are added, and the third call is concatenated. It truly does allow you to have fully dynamic behavior in C#. We can even support fully dynamic classes (à la method_missing) using DynamicObject.
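As a hedged sketch of that method_missing-style behavior, here is a minimal class built on DynamicObject (the Echo class and its reply format are invented for illustration):

```csharp
using System;
using System.Dynamic;

// Responds to any method call it receives, like Ruby's method_missing.
public class Echo : DynamicObject
{
    public override bool TryInvokeMember(
        InvokeMemberBinder binder, object[] args, out object result)
    {
        // Report which "method" was called and with how many arguments.
        result = binder.Name + "/" + args.Length;
        return true; // claim every invocation succeeded
    }
}

public class EchoDemo
{
    public static void Main()
    {
        dynamic echo = new Echo();
        // Neither method exists anywhere; both are resolved at runtime.
        Console.WriteLine(echo.Hello());       // Hello/0
        Console.WriteLine(echo.Add(1, 2, 3));  // Add/3
    }
}
```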

An Overload Of Dynamic

But there is one little wrinkle that C# has to deal with that traditional dynamic languages like Ruby and Python don’t: method overloading. Think about that: in a dynamic language you don’t specify types on method parameters, so there is nothing to overload. The only thing that method signatures are based on is the number of parameters.

But C# still has types. And it has a dynamic type. Hmmmmmmm. That is interesting. So what happens when we declare the above method, but then we declare something like this?


public static dynamic DoSomethingDynamic(string dyn1, dynamic dyn2)
{
    return "nope!";
}

Interesting. Method overloading with dynamic. Suddenly we are in a situation where we have to factor in dynamic as part of the process. So in the above, what happens? Well, thankfully they implemented it in the most obvious way. Types take precedence over dynamic. So if we have the above method, and call it like this:


Console.WriteLine(DoSomethingDynamic("hello", 5));

Then instead of picking the (dynamic, dynamic) overload, the C# compiler picks the overload that matches the most types. But what happens if we implemented these methods:


public static dynamic DoSomethingDynamic(string dyn1, dynamic dyn2)
{
    return "string first!";
}

public static dynamic DoSomethingDynamic(dynamic dyn1, string dyn2)
{
    return "string second!";
}

Looks like we’ve got a bit of a conundrum. If we call this method with (string, string), how would we know which method to call? Well, we can’t, and the C# compiler just throws its hands up and says “The call is ambiguous between the following methods or properties”. Well, that stinks.

The easiest solution would just be to implement an overload that took (string, string); then you couldn’t find yourself in this situation. So, you may be thinking: wouldn’t this always be caught by the compiler?

Can The Compiler Save Us?

Well, let’s consider the situation where you have an overload with (dynamic, dynamic), (string, dynamic), and (dynamic,string). Then we have some code that looks like this:


dynamic val1 = "test";
dynamic val2 = "test2";
Console.WriteLine(DoSomethingDynamic(val1, val2));


Ahhhhhhhhhh! Brain teaser. What do you think will happen? Well, we are dealing with all dynamic variables here, so this is going to compile. And when we run it, do you think that the method with the (dynamic, dynamic) signature will be called? That might make sense at first, but consider that dynamic variables perform method overload resolution at runtime. So, those variables are dynamically typed, but they are strings.

So what happens at runtime is that we determine that those variables are strings, and we try to find out what method overloads are available… and we find the ambiguity… at runtime!
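A runnable sketch along those lines (bundling the three overloads from above into one class) shows the runtime binder throwing where the compiler could not object:

```csharp
using System;
using Microsoft.CSharp.RuntimeBinder;

public class AmbiguityDemo
{
    public static dynamic DoSomethingDynamic(dynamic dyn1, dynamic dyn2) { return "dynamic!"; }
    public static dynamic DoSomethingDynamic(string dyn1, dynamic dyn2)  { return "string first!"; }
    public static dynamic DoSomethingDynamic(dynamic dyn1, string dyn2)  { return "string second!"; }

    public static void Main()
    {
        dynamic val1 = "test";
        dynamic val2 = "test2";
        try
        {
            // Overload resolution happens at runtime, against the actual
            // types -- here (string, string) -- and neither "string first!"
            // nor "string second!" is a better match than the other.
            Console.WriteLine(DoSomethingDynamic(val1, val2));
        }
        catch (RuntimeBinderException ex)
        {
            Console.WriteLine("Ambiguous at runtime: " + ex.Message);
        }
    }
}
```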

If you look at the reflected code for these methods, you’ll also notice references to “object” in the method signatures. Where did that come from? Well, it just so happens that dynamic isn’t really a type. It is just object with a bit of extra behavior added during compilation.

Dynamic Is Special

So, if we were to look at the reflected code for this we would see that the two variable declarations look like this:


object val1 = "test";
object val2 = "test2";

Okay, but what about that special behavior we were talking about? Well, that comes in where these variables are used. In this case we have two method calls: the first one is to DoSomethingDynamic and the next is to Console.WriteLine. In order to invoke these methods with dynamic variables, we need to create things called “call sites”. Call sites are objects, created at runtime during the first invocation, that represent a call to a method. These objects are what allow method resolution and caching to occur at runtime on each call. They look something like this (truncated for brevity):


private static void Main(string[] args)
{
    object val1 = "test";
    object val2 = "test2";
    if (o__SiteContainer0.p__Site1 == null)
    {
        o__SiteContainer0.p__Site1 =
            CallSite<Action>.Create(...);
    }
    if (o__SiteContainer0.p__Site2 == null)
    {
        o__SiteContainer0.p__Site2 =
            CallSite<Func>.Create(...);
    }
    o__SiteContainer0.p__Site1.Target.Invoke(...);
}

The important point is that when you compile a method which has dynamic parameters but is being passed statically typed variables, there doesn’t really need to be any special behavior when the method is invoked. It is inside the method, where we are dealing with the dynamic parameters, that we will start seeing CallSites get created. So, the method will look like this:


[return: Dynamic]
public static object DoSomethingDynamic([Dynamic] object dyn1, [Dynamic] object dyn2)
{
    if (o__SiteContainer4.p__Site5 == null)
    {
        o__SiteContainer4.p__Site5 =
            CallSite<Func>.Create(...);
    }
    return o__SiteContainer4.p__Site5.Target.Invoke(...);
}

Hmmm, so when we are doing method resolution the (dynamic, dynamic) method just looks like (object, object)! So does that mean if we did this:


dynamic val1 = new Object();
dynamic val2 = new Object();
Console.WriteLine(DoSomethingDynamic(val1, val2));

That it would then call the method with the (dynamic, dynamic) signature? Well, yes it does. 🙂 Phew. And with that, we can now get a better picture of the implications of method overloading and our use of dynamic.

Reflecting

So, what does all of this mean? Why is it important? Well, it depends on how you look at it. Are you going to have to deal with method overloading involving dynamic very often? Probably not. Is it interesting to see how much thought and effort it takes in order to design a feature like this? You bet it is.

Easy And Safe Model Binding In ASP.NET MVC

A little over a year ago (wow, it seems like only yesterday), I made a post called Think Before You Bind. In it, I explained exactly why, when you are doing automatic binding to models in ASP.NET MVC, you need to make absolutely sure that you are only binding the properties that you expect. The reason is that in ASP.NET MVC you really have no way of telling what was supposed to be posted to the server and what wasn’t, so someone could tamper with, or fake, the post data and overwrite properties that you weren’t expecting to be changed.

This isn’t something unexpected, but it is definitely not something that Web Forms developers have to really consider when building their solutions. On the flip side, though, ASP.NET Web Forms tracks which fields are supposed to be on the form, which ties you into a fairly static set of fields unless you want to hack your way around that model. And I think many of us know how ugly that can get…

So, the basic problem is that ASP.NET MVC doesn’t care what you render to the user, and it doesn’t care what gets posted back. If the post value name and the property name match, then ASP.NET MVC is going to map the value onto the property, even if you never rendered a field by that name. It is a concern, but one that is easily avoided by using a very simple approach: models that are specific to your views, usually referred to as view models (not to be confused with the MVVM – Model-View-ViewModel – pattern). View models are useful for a number of reasons, but in this case we are leveraging them so that you can expose a surface with only the properties that you want bound.
And that approach works well, but really only if your objects are only ever used in one context. What happens if you need to edit an object as both an end user and an administrator? Certainly you don’t want to allow users to edit the same properties as an admin. Well, if you used the same view model, then you would be setting yourself up for a potential security hole, since the end user could (as we explained earlier) add some bad data into the post and update the view model in ways that you didn’t expect. So, how do you fix this?

Well, one solution is to create two different view models, one for the end user, one for the admin. But that begs the question, what if we need to have even more contexts? Or what if we had interfaces which edited pieces of a larger object? Do we just keep introducing more and more view models? You could, but that would be a lot of work…. if only we had some way to create a view on top of an object which would expose only the properties that we wanted to see.
For the astute, you’ll notice that I just explained interfaces, and last time I checked, C# has those. Thanks to a comment from my good friend Simone Chiaretta (and others), the idea of using a single view model with different interfaces was proposed. Then instead of using the class type to bind, you could just use the interface! Something like this is the result:


public class PersonViewModel : IEditPersonAsUser, IEditPersonAsAdmin
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Role { get; set; }
    public string EmailAddress { get; set; }
}

public interface IEditPersonAsUser
{
    string FirstName { get; set; }
    string LastName { get; set; }
    string EmailAddress { get; set; }
}

internal interface IEditPersonAsAdmin
{
    string FirstName { get; set; }
    string LastName { get; set; }
    string EmailAddress { get; set; }
    string Role { get; set; }
}

The downside to this is that you can no longer use the automatic model binding that ASP.NET MVC gives you. The reason is fairly self-explanatory: how would ASP.NET MVC know which model to bind for this method?


public ActionResult Edit(IEditPersonAsUser personModel)

The answer is, “it wouldn’t”. You could theoretically put that interface on any number of implementations, and so there is no quick and reliable way to pick out the right implementer. (Well, when I say “no way”, you could look for a single implementer and throw an exception if you find more than one.) This is easy enough to work around, though. You can write code to perform the binding manually (this is just screaming for a custom model binder!):


var personViewModel = new PersonViewModel();
UpdateModel<IEditPersonAsUser>(personViewModel);

Notice here that we are telling the UpdateModel method to bind the view model using the IEditPersonAsUser interface. This works well, but it would still be better if we could avoid repeating those few lines of code over and over. We could put a small method in a base controller and make it even easier:

var personViewModel = Bind()

A tiny bit easier and cleaner. Now you can create view models for specific entities and reuse them in multiple scenarios without having to worry about whitelists, blacklists, or creating a ton of different view model classes.

As I mentioned earlier, though, this is just crying out for a custom model binder. We could set one up so that when it sees an interface type, it searches for the single type implementing that interface and throws an exception if it finds more than one. Since our whole purpose here is to use an interface to constrain a single view model to different views of the same model, that shouldn’t hurt us at all. Maybe I’ll implement that for you in a future post.
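The core of such a binder is a single reflection lookup. Here is a minimal, framework-free sketch of that rule; the `InterfaceResolver` and `ResolveSingleImplementer` names are mine, not part of ASP.NET MVC, and a real implementation would wrap this in a custom model binder and call `Activator.CreateInstance` on the resulting type before binding against the interface:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical helper illustrating the "find the single implementer" rule
// a custom model binder for interface parameters would apply.
public static class InterfaceResolver
{
    public static Type ResolveSingleImplementer(Type interfaceType, Assembly assembly)
    {
        if (!interfaceType.IsInterface)
            throw new ArgumentException("Expected an interface type.", "interfaceType");

        // Collect all concrete types in the assembly implementing the interface.
        var implementers = assembly.GetTypes()
            .Where(t => t.IsClass && !t.IsAbstract && interfaceType.IsAssignableFrom(t))
            .ToList();

        // More than one implementer (or none) means the binding is ambiguous.
        if (implementers.Count != 1)
            throw new InvalidOperationException(
                "Expected exactly one implementer of " + interfaceType.Name +
                ", found " + implementers.Count + ".");

        return implementers[0];
    }
}
```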

I hope that you found this post informative and useful. If you have any ideas for things that could be improved or modified, please leave me a comment!

Resetting local Ports and Devices from .NET

Currently, I am working on C# applications that communicate with several external devices connected via USB ports. In rare cases the ports simply stop working correctly, so we needed a programmatic way to reset them. Doing this from C# is not trivial. The solution we implemented uses the following components:
– C#’s SerialPort class
– WMI (Windows Management Instrumentation)
– P/Invoke with calls to the Windows Setup API

Accessing a port using C#
Using a serial port from C# is rather simple. The .NET Framework provides the SerialPort class as a wrapper around the underlying Win32 port API. You simply create a new instance of the class, providing the name of an existing port. The instance has methods to open or close the port and to read and write data from its underlying stream. It also provides events to notify listeners when data or errors are received. Here is a small example of how to open a port, first checking whether the given port name exists in the operating system:


public SerialPort OpenPort(string portName)
{
    if (!IsPortAvailable(portName))
    {
        return null;
    }

    var port = new SerialPort(portName);

    try
    {
        port.Open();
    }
    catch (UnauthorizedAccessException) { ... }
    catch (IOException) { ... }
    catch (ArgumentException) { ... }

    return port;
}

private bool IsPortAvailable(string portName)
{
    // Retrieve the list of ports currently mounted by
    // the operating system and look for the requested name.
    string[] ports = SerialPort.GetPortNames();
    return ports.Any(s =>
        s.Equals(portName, StringComparison.InvariantCultureIgnoreCase));
}

In rare cases the Open method of the port threw an IOException in our applications. The problem was not reproducible, but it was also not acceptable. We noticed that after deactivating the port in the Device Manager and reactivating it, everything worked fine again. So we searched for a way to do exactly the same thing from code.

Enable/Disable Devices
First of all, a set of functions was needed to disable and enable a given port. This cannot be done directly from C# and requires some P/Invoke calls to the Win32 API. Luckily, others have had similar problems, and I found a very good solution at http://www.eggheadcafe.com/community/csharp/2/10315145/enabledisable-comm-port-by-programming.aspx. It uses the device installation functions of the Win32 Setup API. All you need is the class GUID for the device set and the instance ID of the specific device you want to disable or enable. Have a look at the link for the code, or download the accompanying code for this blog post.

Retrieving the Port’s Instance ID
The last thing to do is to acquire the correct instance ID for our port. We need a method that takes the port name and retrieves the corresponding instance ID. For the exact definition of an instance ID in Windows terms, have a look at http://msdn.microsoft.com/en-us/library/windows/hardware/ff541224(v=vs.85).aspx. In our case we’d like to use the Plug ’n’ Play device ID that can also be seen in the properties window of a device in the Device Manager. For this purpose we use WMI; if you need background on WMI, have a look at http://msdn.microsoft.com/en-us/library/windows/desktop/aa394582(v=vs.85).aspx. WMI provides the Win32_SerialPort class, which can be used to iterate over all ports mounted by the operating system. Two of its properties are important for us: DeviceID, which holds the port name, and PNPDeviceID, which holds the Plug ’n’ Play instance ID. Note that while this works perfectly for Plug ’n’ Play devices, it may not work for other kinds of devices.

string instanceId = null;
ManagementObjectSearcher searcher =
    new ManagementObjectSearcher("select * from Win32_SerialPort");
foreach (ManagementObject port in searcher.Get())
{
    if (port["DeviceID"].ToString().Equals(portName))
    {
        instanceId = port["PNPDeviceID"].ToString();
        break;
    }
}

If we find the appropriate instance ID, we can use it together with the Win32 Setup API to retrieve a device info set and the corresponding device info data. If device info for the instance ID is found, we can use its class GUID and the instance ID to disable and enable the device. The following method resets a port with a given instance ID:

public static bool TryResetPortByInstanceId(string instanceId)
{
    SafeDeviceInfoSetHandle diSetHandle = null;
    if (!String.IsNullOrEmpty(instanceId))
    {
        try
        {
            Guid[] guidArray = GetGuidFromName("Ports");

            // Get the handle to a device information set for all
            // devices matching classGuid that are present on the
            // system.
            diSetHandle = NativeMethods.SetupDiGetClassDevs(
                ref guidArray[0],
                null,
                IntPtr.Zero,
                SetupDiGetClassDevsFlags.DeviceInterface);

            // Get the device information data for each matching device.
            DeviceInfoData[] diData = GetDeviceInfoData(diSetHandle);

            // Try to find the object with the same instance Id.
            foreach (var infoData in diData)
            {
                var instanceIds =
                    GetInstanceIdsFromClassGuid(infoData.ClassGuid);
                foreach (var id in instanceIds)
                {
                    if (id.Equals(instanceId))
                    {
                        // Disable the port.
                        SetDeviceEnabled(infoData.ClassGuid, id, false);
                        // Wait some milliseconds.
                        Thread.Sleep(200);
                        // Re-enable the port.
                        SetDeviceEnabled(infoData.ClassGuid, id, true);
                        return true;
                    }
                }
            }
        }
        catch (Exception)
        {
            return false;
        }
        finally
        {
            if (diSetHandle != null)
            {
                if (diSetHandle.IsClosed == false)
                {
                    diSetHandle.Close();
                }
                diSetHandle.Dispose();
            }
        }
    }
    return false;
}

With the code set up so far, we can now easily reset a port whenever we get an IOException while trying to open it. We just call the method inside our exception handler and try to open the port again. That solved our initial problem. Be aware that disabling and re-enabling the port may take some time, so it may be a good idea to do it on a separate thread if you’re working in a GUI application.
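That retry flow can be factored into a small generic helper. This is just a sketch under my own naming (`PortRetry`, `OpenWithReset`); in our application, `open` would be the call that creates and opens the SerialPort, and `reset` would be a call to TryResetPortByInstanceId with the instance ID looked up via WMI:

```csharp
using System;
using System.IO;

// Sketch of a "reset and retry once" wrapper around an IO operation.
public static class PortRetry
{
    public static T OpenWithReset<T>(Func<T> open, Func<bool> reset)
    {
        try
        {
            return open();
        }
        catch (IOException)
        {
            // If disabling/re-enabling the device also failed,
            // rethrow the original IOException.
            if (!reset())
                throw;

            // The reset succeeded, so try to open exactly once more.
            return open();
        }
    }
}
```

Because the reset inside `OpenWithReset` can block for several hundred milliseconds, running the whole call on a background thread keeps the delay off the UI thread, as noted above.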

C# Operator Overloading for Improved Code Readability

In this blog post I’m going to show you some basics of operator overloading in C#. Operators can be very useful when you want to write elegant, easy-to-understand code using standard C# syntax. For this post I will use a simple API that creates Crystal Reports selection formulas from a given DataSet and a handful of operators.

Introduction: Crystal Syntax
In order to understand why operators fit this usage scenario so well, we need a short glimpse at the Crystal syntax for selection formulas. Selection formulas let you filter the data displayed inside a Crystal Reports report according to the underlying data. Often you use a dataset and bind it to a report, which in turn is shown in the Crystal Reports viewer. Formulas are given as plain strings, so if you want to use them programmatically you have to learn Crystal syntax, which is not much fun. Let’s have a look at the basic concepts:

Column References
References to columns of the underlying data source are written as {TableName.ColumnName}. So if you have a dataset with a table called Customer and a column called Name, you would write {Customer.Name}.

Comparison Operators
You can use comparison operators to compare a column value to any given value. Besides the standard comparison operators like ‘less than’, ‘greater than’, or ‘equals’, there are also string operators like ‘starts with’ or ‘like’. So you could write something like the following: {Customer.Name} startswith ‘A’.

Boolean Operators
Simple expressions like the one shown above can be combined using Boolean operators. Crystal Reports supports Not, And, Or, Xor, Eqv and Imp. Here’s an example: {Customer.Name} startswith ‘A’ and {Customer.Age} < 18.

I think that’s enough for the moment to get started. If you want to see more operators or want to play around with selection formulas use the formula editor of the Crystal Reports designer in Visual Studio.

Crystal Syntax Converter
I don’t feel comfortable writing magic strings in code without anything checking them. Selection formulas can become very complex and error-prone. So I searched for a way to write C# code and have the string generated from it. My first approach was a kind of fluent API, so I could write something like the following:

Customer.NameColumn.StartsWith(“A”).And(Customer.AgeColumn.LessThan(18))

The result of this expression is equivalent to the Boolean operators example above, but it’s much longer and more difficult to read. So I dropped that approach and turned to operator overloading. As stated at the beginning, operators are very powerful when used in appropriate situations. For our example we need to overload binary operators to get the Boolean operators and, of course, the comparison operators.

Let’s start with the simplest expression: we want to compare a column of our data source to some other value (e.g. a string or a number). I start by writing a simple extension method, because I don’t want to write DataSet.Table.Column every time, and I need a custom type on which I can define the overloaded operators. The custom class is called CrystalColumn, and the extension method looks like this:

public static CrystalColumn ToCrystalColumn(this DataColumn dc)
{
    var sb = new StringBuilder("{");
    sb.Append(dc.Table.TableName);
    sb.Append(".");
    sb.Append(dc.ColumnName);
    sb.Append("}");

    return new CrystalColumn(sb.ToString());
}

The method simply takes a DataColumn and creates a string representation of it according to the Crystal syntax. So now we are able to write:

var firstname = dataSet.Customer
.FirstNameColumn.ToCrystalColumn();

The next step would be to overload the needed operators on the CrystalColumn class. Here’s the example of the ‘less than’ operator:

public static SingleExpression operator <(CrystalColumn column, int value)
{
    return new SingleExpression(column.ToString() + " < " + value);
}

The method needs to be static, and the first parameter is the object on which the operator is defined. The provided objects are converted to their string representations and passed to a new instance of the SingleExpression class, which encapsulates a simple expression. Note that C# requires relational operators to be overloaded in pairs, so defining ‘<’ also requires a matching ‘>’ overload. With the code so far we can write the following:

var age = dataSet.Customer.AgeColumn.ToCrystalColumn();
var singleExpression = age < 18;

The ‘starts with’ comparison has no C# operator counterpart, so it is implemented as a regular method on the CrystalColumn class:

public SingleExpression StartsWith(params string[] s)
{
    var columnExpression = this.ToString();
    var startsWithExpression = "'" + s[0] + "'";
    if (s.Length > 1)
    {
        startsWithExpression = "[";
        for (int i = 0; i < s.Length; i++)
        {
            startsWithExpression += "'" + s[i] + "'";
            if (i < s.Length - 1)
                startsWithExpression += ",";
        }
        startsWithExpression += "]";
    }

    var expression = columnExpression + " startswith " + startsWithExpression;
    return new SingleExpression(expression);
}

The ’starts with’ operator in Crystal Reports accepts an array of strings; that’s why the method takes a string array as its parameter. Again, a string representation of the provided objects is created and passed to a new instance of the SingleExpression class. This method allows us to write:

var firstname = dataSet.Customer.FirstNameColumn.ToCrystalColumn();
var singleExpression = firstname.StartsWith("Mike", "John");

What’s missing now is the ability to combine two or more simple expressions with Boolean operators to form complex expressions. To achieve this, we overload the needed binary operators on the SingleExpression class. For example, the ‘And’ operator can be overloaded as follows:

public static Expression operator &(SingleExpression ex1, SingleExpression ex2)
{
    return ex1.And(ex2);
}

private Expression And(SingleExpression singleExpression)
{
    return new Expression(this,
        singleExpression, LogicalOperator.and);
}

The operator overload takes a second SingleExpression instance and calls the private And method. This method in turn creates a new Expression instance, which encapsulates two or more simple expressions and finally generates the needed string representation. As another example, here is the implementation of the ‘Not’ operator:

public static SingleExpression operator !(SingleExpression ex)
{
    return new SingleExpression(
        "not (" + ex.ToString() + ")");
}

The method takes only one argument, because ‘Not’ is a unary operator acting on a single operand. Accordingly, it simply returns another simple expression.

Putting it all together
With the classes and methods we have so far, we can now write our example expression using operators:

var firstname = dataSet.Customer.FirstNameColumn.ToCrystalColumn();
var age = dataSet.Customer.AgeColumn.ToCrystalColumn();

var expression = firstname.StartsWith("A", "B") & age < 18;

When you compare that statement to the first approach, it’s easy to see that readability has improved a lot. We can also compare it to the resulting formula in Crystal syntax and see that it is nearly as short as the formula itself:

{Customer.FirstName} startswith [‘A’,’B’] and {Customer.Age} < 18

The Expression class overrides its ToString method to emit the complete selection formula, so you can easily pass it to the report viewer:

crystalViewer.SelectionFormula = expression.ToString();
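If you want to try the comparison for yourself, here is a stripped-down, self-contained version of the classes from this post, reduced to just the members the final example needs. Note two simplifications that differ from the real code: the ‘&’ overload here returns a SingleExpression rather than a separate Expression class, and StartsWith builds its list inline:

```csharp
using System;

// Minimal stand-in for the post's SingleExpression class.
public class SingleExpression
{
    private readonly string _expr;
    public SingleExpression(string expr) { _expr = expr; }
    public override string ToString() { return _expr; }

    // 'and' concatenation; the full implementation returns an Expression.
    public static SingleExpression operator &(SingleExpression a, SingleExpression b)
    {
        return new SingleExpression(a + " and " + b);
    }
}

// Minimal stand-in for the post's CrystalColumn class.
public class CrystalColumn
{
    private readonly string _reference;
    public CrystalColumn(string reference) { _reference = reference; }
    public override string ToString() { return _reference; }

    // C# demands that '<' and '>' are overloaded as a pair.
    public static SingleExpression operator <(CrystalColumn c, int value)
    {
        return new SingleExpression(c + " < " + value);
    }

    public static SingleExpression operator >(CrystalColumn c, int value)
    {
        return new SingleExpression(c + " > " + value);
    }

    // A single value stays bare ('A'); multiple values become ['A','B'].
    public SingleExpression StartsWith(params string[] s)
    {
        string values = s.Length == 1
            ? "'" + s[0] + "'"
            : "[" + string.Join(",", Array.ConvertAll(s, x => "'" + x + "'")) + "]";
        return new SingleExpression(this + " startswith " + values);
    }
}
```

With these stand-ins, `firstname.StartsWith("A", "B") & age < 18` produces exactly the formula string shown above; note that C#’s relational operators bind tighter than ‘&’, so no extra parentheses are needed around `age < 18`.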

Summary
This was only a short example of operator overloading in C#, but I hope you can see the benefit of using it. If you need more information about C# operators, just search the internet; it’s not a new concept. A few words of wisdom: operators are not a reasonable fit for every situation. And when you do use them, be sure to use them correctly and implement only operations that behave the way a user would expect. If, for example, you implement the ‘+’ operator for a custom number class, don’t return the difference. People anticipate the results of common operations and get confused if something strange happens.