What is the difference between GridView, DataList, and Repeater?

The GridView and DataGrid by default display all the data in tabular format, i.e., in rows and columns. The developer has very little control over changing the tabular layout the DataGrid produces.

The DataList also displays data in a table, but it gives some flexibility in laying the data out row-wise or column-wise using the RepeatDirection property.

The Repeater control is highly customizable. It does not display data in a table by default, so you can define from scratch how you want the data displayed.
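
As a rough sketch (assuming a Web Forms page that declares an asp:Repeater with ID "ProductRepeater" and its own ItemTemplate markup; the names here are illustrative), binding data in the code-behind might look like this:

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // The Repeater imposes no table layout; it simply emits the HTML
        // defined in its ItemTemplate once per data item.
        ProductRepeater.DataSource = new[] { "Apples", "Oranges", "Pears" };
        ProductRepeater.DataBind();
    }
}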

Leverage the C# Preprocessor

Like other languages in the C-family, C# supports a set of ‘preprocessor’ directives, most notably #define, #if and #endif (technically, csc.exe does not literally have a preprocessor as these symbols are resolved at the lexical analysis phase, but no need to split hairs…).

The #define directive allows you to set up custom symbols which control code compilation. Be very aware that, unlike C and C++, C#’s #define does not allow you to create macro-like code. Once a symbol is defined, #if and #endif may be used to test for said symbol. By way of a common example:

#define DEBUG
using System;

public class MyClass
{
    public static void Main()
    {
#if DEBUG
        Console.WriteLine("DEBUG symbol is defined!");
#endif
    }
}
When you use the #define directive, the symbol is only recognized within the defining file. However, if you wish to define project-wide symbols, simply access your project’s property page, navigate to the “Configuration Properties | Build” node, and edit the “Conditional Compilation Constants” field. Finally, if you wish to disable a symbol for a given file, you may make use of the #undef directive.
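
A minimal sketch of #undef (the symbol name here is purely illustrative); note that #define and #undef must appear before any other code in the file:

#define TRACE_LOGGING
#undef TRACE_LOGGING   // the symbol is now undefined for the rest of this file

using System;

public class UndefExample
{
    public static void Main()
    {
#if TRACE_LOGGING
        Console.WriteLine("Tracing is enabled.");
#else
        Console.WriteLine("Tracing is disabled.");  // this branch is compiled
#endif
    }
}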

Why Doesn’t C# Implement “Top Level” Methods?

C# requires that every method be in some class, even if it is a static method in a static class in the global namespace. Other languages allow “top level” functions. A recent Stack Overflow post asks why that is.
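
To make the contrast concrete, here is a minimal sketch of what the rule requires today versus the hypothetical “top level” form (the second form is not legal in the C# described here):

// What C# requires: even "Hello World" must live inside a type.
using System;

static class Program
{
    static void Main()
    {
        Console.WriteLine("Hello, world");
    }
}

// The hypothetical "top level" alternative would be just:
// Console.WriteLine("Hello, world");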

I am asked “why doesn’t C# implement feature X?” all the time. The answer is always the same: because no one ever designed, specified, implemented, tested, documented and shipped that feature. All six of those things are necessary to make a feature happen. All of them cost huge amounts of time, effort and money. Features are not cheap, and we try very hard to make sure that we are only shipping those features which give the best possible benefits to our users given our constrained time, effort and money budgets.

I understand that such a general answer probably does not address the specific question.

In this particular case, the clear user benefit was in the past not large enough to justify the complications to the language which would ensue. By restricting how different language entities nest inside each other we (1) restrict legal programs to be in a common, easily understood style, and (2) make it possible to define “identifier lookup” rules which are comprehensible, specifiable, implementable, testable and documentable.

By restricting method bodies to always be inside a struct or class, we make it easier to reason about the meaning of an unqualified identifier used in an invocation context; such a thing is always an invocable member of the current type (or a base type).

Now, JScript.NET has this feature. (And in fact, JScript.NET goes even further; you can have program statements “at the top level” too.) A reasonable question is “why is this feature good for JScript but bad for C#?”

First off, I reject the premise that the feature is “bad” for C#. The feature might well be good for C#, just not good enough compared to its costs (and to the opportunity cost of doing that feature instead of a more valuable feature.) The feature might become good enough for C# if its costs are lowered, or if the compelling benefit to customers becomes higher.

Second, the question assumes that the feature is good for JScript.NET. Why is it good for JScript.NET?

It’s good for JScript.NET because JScript.NET was designed to be a “scripty” language as well as a “large-scale development” language. The original design of “JScript classic” as a scripting language requires that “a one-line program actually be one line”. If your intention is to make a language that allows for rapid development of short, simple scripts by novice developers, then you want to minimize the amount of “ritual incantation” that must happen in every program. In JScript you do not want to have to start with a bunch of using clauses and define a class and then put stuff in the class and have a Main routine and blah blah blah, all this ritual just to get Hello World running.

C# was designed to be a large-scale application development language geared towards pro devs from day one; it was never intended to be a scripting language. Its design therefore encourages enforcing the immediate organization of even small chunks of code into components. C# is a component-oriented language. We therefore want to encourage programming in a component-based style and discourage features that work against that style.

This is changing. “REPL” languages like F#, long popular in academia, are increasing in popularity in industry. There’s a renewed interest in “scripty” application programmability via tools like Visual Studio Tools for Applications. These forces cause us to re-evaluate whether “a one line program is one line” is a sensible goal for hypothetical future versions of C#. Hitherto it has been an explicit non-goal of the language design.

(As always, whenever I discuss the hypothetical “next version of C#”, keep in mind that we have not announced any next version, that it might never happen, and that it is utterly premature to think about feature sets or schedules. All speculation about future versions of unannounced products should be taken as “for entertainment purposes only” musings, not as promises about future offerings.)

We are therefore considering adding this feature to a hypothetical future version of C#, in order to better support “scripty” scenarios and REPL evaluation. When the existence of powerful new tools is predicated upon the existence of language features, that points towards getting the language features done.

Why is deriving a public class from an internal class illegal?

In C# it is illegal to declare a class D whose base class B is in any way less accessible than D. I’m occasionally asked why that is. There are a number of reasons; today I’ll start with a very specific scenario and then talk about a general philosophy.

Suppose you and your coworker Alice are developing the code for assembly Foo, which you intend to be fully trusted by its users. Alice writes:

public class B
{
    public void Dangerous() {…}
}

And you write

public class D : B
{
    … other stuff …
}

Later, Alice gets a security review from Bob, who points out that method Dangerous could be used as a component of an attack by partially-trusted code, and who further points out that customer scenarios do not actually require B to be used directly by customers in the first place; B is actually only being used as an implementation detail of other classes. So in keeping with the principle of least privilege, Alice changes B to:

internal class B
{
    public void Dangerous() {…}
}

Alice need not change the accessibility of Dangerous, because of course “public” means “public to the people who can see the class in the first place”.

So now what should happen when Alice recompiles before she checks in this change? The C# compiler does not know if you, the author of class D, intended method Dangerous to be accessible by a user of public class D. On the one hand, it is a public method of a base class, and so it seems like it should be accessible. On the other hand, the fact that B is internal is evidence that Dangerous is supposed to be inaccessible outside the assembly. A basic design principle of C# is that when the intention is unclear, the compiler brings this fact to your attention by failing. The compiler is identifying yet another form of the Brittle Base Class Failure, which long-time readers know has shown up in numerous places in the design of C#.

Rather than simply making this change and hoping for the best, you and Alice need to sit down and talk about whether B really is a sensible base class of D; it seems plausible that either (1) D ought to be internal also, or (2) D ought to favour composition over inheritance. Which brings us to my more general point:

More generally: the inheritance mechanism is simply the fact that all heritable members of the base type are also members of the derived type. But the semantics of the inheritance relationship are intended to model the “is a kind of” relationship. It seems reasonable that if D is a kind of B, and D is accessible at a location, then B ought to be accessible at that location as well. It seems strange that you could only use the fact that “a Giraffe is a kind of Animal” at specific locations.

In short, this rule of the language encourages you to use inheritance relationships to model the business domain semantics rather than as a mechanism for code reuse.

Finally, I note that as an alternative, it is legal for a public class to implement an internal interface. In that scenario there is no danger of accidentally exposing dangerous functionality from the interface to the implementing type because of course the interface is not associated with any functionality in the first place; an interface is logically “abstract”. Implementing an internal interface can be used as a mechanism that allows public components in the same assembly to communicate with each other over “back channels” that are not exposed to the public.
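
A minimal sketch of that “back channel” pattern (the interface and type names here are illustrative, not from the original discussion):

// Visible only inside this assembly.
internal interface IBackChannel
{
    void ReceiveInternalMessage(string message);
}

public class Widget : IBackChannel
{
    // Explicit implementation keeps the member off Widget's public surface.
    void IBackChannel.ReceiveInternalMessage(string message)
    {
        // react to the message...
    }
}

public class WidgetCoordinator
{
    public void Notify(Widget widget)
    {
        // Only code in the same assembly can see IBackChannel,
        // so only it can talk to Widget over this channel.
        ((IBackChannel)widget).ReceiveInternalMessage("ping");
    }
}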

Introduction to Mixins For the C# Developer

If you are a C# developer then you may keep hearing about all the cool kids from Smalltalk, Ruby, Python, Scala using these crazy things called mixins. You may even be a little jealous, not because you want the feature, but because they have a feature with an awesome name like “mixin”. The name is pretty sweet. And in fact, it is fairly self-explanatory since mixins are all about “mixing-in behaviors”.

It is actually an Aspect Oriented Programming (AOP) term, which Wikipedia defines as:

A mixin is a class that provides a certain functionality to be inherited by a subclass, but is not meant to stand alone. Inheriting from a mixin is not a form of specialization but is rather a means to collect functionality. A class may inherit most or all of its functionality by inheriting from one or more mixins through multiple inheritance.

No wonder people are confused! That isn’t exactly clear. So let’s try to clear it up just a tiny bit…

Let’s say that we have a C# class that looks like this:

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}
Looks good. And we have another C# class that looks like this:

public class Car
{
    public string Make { get; set; }
    public string Model { get; set; }
}
Mmmmkay. These classes obviously don’t have anything to do with one another, and hopefully they aren’t in the same object hierarchy. What if we need to do something like, say, serialize to XML? In .NET we would normally decorate the type with a SerializableAttribute and then we would fire up an instance of the XmlSerializer by abstracting it into a method like this:

using System;
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Serialization;

public static class XmlHelper
{
    public static string SerializeToXml(Object obj)
    {
        var xmlSerializer = new XmlSerializer(obj.GetType());
        using (var memoryStream = new MemoryStream())
        {
            using (var xmlWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
            {
                xmlSerializer.Serialize(xmlWriter, obj);
            }
            // ToArray (rather than GetBuffer) returns only the bytes actually written.
            return Encoding.UTF8.GetString(memoryStream.ToArray());
        }
    }
}
So now when we need to serialize a class, we can just do this:

string xml = XmlHelper.SerializeToXml(person);
That isn’t too bad, but what if we wanted to do something like this:

string xml = person.SerializeToXml();

In C# 3.0 and later we can introduce an extension method to get exactly that behavior:

// Requires the same using directives as the helper above
// (System.IO, System.Text, System.Xml, System.Xml.Serialization).
public static class XmlExtensions
{
    public static string SerializeToXml(this Object obj)
    {
        var xmlSerializer = new XmlSerializer(obj.GetType());
        using (var memoryStream = new MemoryStream())
        {
            using (var xmlWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
            {
                xmlSerializer.Serialize(xmlWriter, obj);
            }
            return Encoding.UTF8.GetString(memoryStream.ToArray());
        }
    }
}
Now you see what we are doing here: we are creating an extension method on Object so that we can use it on any class. (Well, it really is just a compiler trick, but it “appears” that we have this method on every class.) We are performing a very weak form of a mixin, because we now have a shared behavior for any object, even if the objects don’t share an inheritance hierarchy. So, why do I say that this is a “weak” mixin? We are sharing behavior across multiple classes, right?

Well, I say it is “weak”, but I really should say that it is not a mixin at all, because true mixins have state as well as methods. For example, in the above scenario, let’s say we wanted to cache the result of the serialization so that the next time we called it, we would get the same result back. (This obviously isn’t something you’d want to do unless you had change tracking.) With the C# extension method, this is impossible: there is no way to associate state with the particular instance.

So how do other languages support this behavior? Well, Ruby supports it with a concept called modules, which look very similar to classes and can be “included” into classes. Ruby even allows you to apply mixins at runtime, which can be a very powerful, albeit potentially confusing, feature. Python takes a different route by allowing multiple inheritance, so I guess you could say those aren’t really mixins either. Multiple inheritance solves all the same problems as mixins, though it adds a bit of complexity to the implementation (see the Diamond Problem).

In terms of being similar to C#, the Scala solution is the most interesting, perhaps because Scala is a statically typed, compile-time-bound language (for the most part) and so faces some of the same hurdles in implementing mixins that C# would face. In Scala the feature is called “traits”, and traits can be applied to both classes and instances during construction.

I’m not going to show you the Scala implementation of traits, but what I am going to do is make up a syntax for C# traits so that we can implement the behavior we want above. So first we are going to have to decide on a keyword for this new construct, and I am going to just use the Scala “trait” keyword. In Scala traits look like classes, because that is essentially what they are. They are classes which are not inherited from, but rather “applied” to another class. In fact, traits can even descend from abstract classes.

Nothing Past This Line is Valid C#!

So our C# trait might look something like this:

trait XmlSerializer
{
    public string SerializeToXml()
    {
        var xmlSerializer = new XmlSerializer(this.GetType());
        using (var memoryStream = new MemoryStream())
        {
            using (var xmlWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
            {
                xmlSerializer.Serialize(xmlWriter, this);
            }
            return Encoding.UTF8.GetString(memoryStream.ToArray());
        }
    }
}
Neato. We could then take this trait and apply it to our class using the “with” keyword:

public class Person with XmlSerializer
{
    public string Name { get; set; }
    public int Age { get; set; }
}
Cool, so we have replicated the behavior of the extension method. But now how about that caching? Well, we wouldn’t even need to touch the Person class, we would only have to change the trait:

trait XmlSerializer
{
    private string xml;

    public string SerializeToXml()
    {
        if (String.IsNullOrEmpty(xml))
        {
            var xmlSerializer = new XmlSerializer(this.GetType());
            using (var memoryStream = new MemoryStream())
            {
                using (var xmlWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
                {
                    xmlSerializer.Serialize(xmlWriter, this);
                }
                xml = Encoding.UTF8.GetString(memoryStream.ToArray());
            }
        }
        return xml;
    }
}
Nice. Now if you think about it, if traits had the ability to support abstract members, then you wouldn’t need interfaces at all, would you? Well, it just so happens that Scala doesn’t have interfaces, and it uses traits in this exact capacity. If you declare a mixin with all abstract methods, then you have an interface. It becomes even more powerful when you declare mixins with abstract methods that are used by concrete methods that are also declared within the mixin, as in the sketch below. Head hurting yet? It is some pretty powerful stuff, and should be used as any powerful tool should be, judiciously.
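
Continuing our made-up syntax (still not valid C#), a trait mixing an abstract member with a concrete member that uses it might look like this; the names are purely illustrative:

trait Describable
{
    // Each class that mixes this in must supply a Name.
    public abstract string Name { get; }

    // Concrete behavior built on top of the abstract member.
    public string Describe()
    {
        return "This is " + Name;
    }
}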

One final note about Scala traits, which I mentioned earlier, is that they can be applied to instances during construction. This is an interesting behavior because it allows individual instances of classes to have a trait applied. If you think about trying to apply an interface at runtime, you will realize that any trait applied at runtime would have to contain no abstract methods; otherwise you would have no way to tell whether the class implements the methods being applied. This is why Scala only allows traits to be applied at construction of an instance: that way Scala can check at compile time whether the class implements all of the abstract methods that are needed. So, in C# this syntax would look something like this:

var person = new Person() with XmlSerializable;
And if we needed to pass this into a method, we could do this:

public string DoSomething(Person with XmlSerializable person)
{
    return person.SerializeToXml();
}

Checking XML for Semantic Equivalence in C#

I was writing a bit of code for a small project, and it was creating some XML that I needed to pass to another application. In order to test this functionality, I needed to compare the XML generated by my API against some hard-coded XML. I started off with this:


var expectedXml = @"testvalue";

var actualXml = MyAPI.DoSomeStuff().GenerateXml();

Assert.Equal(expectedXml, actualXml);

But I quickly found out that this wasn’t going to scale. Once the XML got too large, the line would run on far too long, making my tests read horribly. So, I did this:


var expectedXml = @"

testvalue

";

var actualXml = MyAPI.DoSomeStuff().GenerateXml();

Assert.Equal(expectedXml, actualXml);

The problem was that now the XML strings weren’t equal. Well, they are semantically equivalent; they just aren’t equal under a string comparison. The reason for this is that all of that extra whitespace and those end-of-line characters throw off the comparison. You might be thinking: well, just strip out the whitespace and EOL characters. It ain’t that easy. What happens when that whitespace is inside of an XML element’s text? At that point it becomes meaningful for comparison purposes.

I didn’t want to write my own comparison code (who wants to write that?), so I started hunting around. Since I was already using the .NET 3.5 XElement libraries, I started looking there first. I came across a little method on the XNode class called DeepEquals, and guess what, it does exactly what I want: it compares a node and all child nodes for semantic equivalence. I’m sure there are probably a few gotchas in there for me, but after preliminary tests, it appears to work perfectly.

I created a little method to do my XML asserts for me:


private void AssertEqualXml(string expectedXml, string actualXml)
{
    Assert.IsTrue(XNode.DeepEquals(XElement.Parse(expectedXml), XElement.Parse(actualXml)),
        String.Format("{0} \n does not equal \n{1}", actualXml, expectedXml));
}

There you have it. It loads the expected and actual XML into XElements and then calls DeepEquals on them. Now I can write the XML I want to compare in the most readable fashion and not worry about how the strings are going to compare.
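
For example, a quick usage sketch (element names are illustrative): these two strings differ only in insignificant whitespace between elements, so the assertion passes.

var expectedXml = @"
<root>
    <element>testvalue</element>
</root>";

var actualXml = "<root><element>testvalue</element></root>";

AssertEqualXml(expectedXml, actualXml);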

Overloading Dynamic

If you’ve been checking out Visual Studio 2010 (or reading my blog) then you might have noticed the new “dynamic” keyword in C# 4.0. So what is the dynamic keyword? It allows us to perform late binding in C#! What is late binding, you ask? It means that operations on the variable aren’t bound at compile time; they are instead bound at runtime. By “bound” I mean that the decision of which particular member to invoke is made while the application is running, not during compilation.

In the past, you might have seen examples of dynamic using a sample like this:


dynamic value = "test string";

value.DoSomethingSuper();

Now obviously the String class does not have a method called “DoSomethingSuper”, but this code will compile. It will blow up at runtime with an error saying that the string class does not contain a definition for “DoSomethingSuper”. If you want a more in depth look at the basic usage of the keyword, see the linked post above.
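
If it helps to see that failure mode spelled out, here is a minimal sketch of catching the runtime error (assuming a reference to Microsoft.CSharp, which a C# 4.0 project has by default):

using System;
using Microsoft.CSharp.RuntimeBinder;

class Program
{
    static void Main()
    {
        dynamic value = "test string";
        try
        {
            value.DoSomethingSuper();   // compiles, but fails when executed
        }
        catch (RuntimeBinderException ex)
        {
            // The message says the string type has no definition
            // for 'DoSomethingSuper'.
            Console.WriteLine(ex.Message);
        }
    }
}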

So Much Dynamicness

What is really interesting is that the dynamic keyword isn’t just for declaring local variables. We can use it for method parameters, return types, and almost anywhere that we can specify a type. Which means that we could actually write something like this (not that you necessarily would want to, but you could):


public static dynamic DoSomethingDynamic(dynamic dyn1, dynamic dyn2)
{
    return dyn1 + dyn2;
}

Interesting. So this method does basically what we would find in any dynamic language such as Python or Ruby. I can call it like this:


DoSomethingDynamic(3, 5);

Or I can call it like this:


DoSomethingDynamic(3.5, 5.2);

Or even like this:


DoSomethingDynamic("hello", "there");

And guess what, it works like you would expect. In the first two calls the arguments are added, and in the third call they are concatenated together. It truly does allow you to have fully dynamic behavior in C#. We can even support fully dynamic classes (à la method_missing) using DynamicObject.
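
A minimal sketch of such a class (the type and member names are illustrative): by deriving from System.Dynamic.DynamicObject and overriding TryInvokeMember, any method call is intercepted at runtime.

using System;
using System.Dynamic;

public class Echo : DynamicObject
{
    // Called whenever a method that doesn't statically exist is invoked.
    public override bool TryInvokeMember(
        InvokeMemberBinder binder, object[] args, out object result)
    {
        result = String.Format("You called {0} with {1} argument(s)",
                               binder.Name, args.Length);
        return true;   // report the call as handled
    }
}

// Usage:
// dynamic echo = new Echo();
// Console.WriteLine(echo.AnythingAtAll(1, 2, 3));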

An Overload Of Dynamic

But there is one little wrinkle that C# has to deal with that traditional dynamic languages like Ruby and Python don’t: method overloading. Think about it: in a dynamic language, you don’t specify types on method parameters, so there is nothing to overload. The only thing that method signatures can differ by is the number of parameters.

But C# still has types. And it has a dynamic type. Hmmmmmmm. That is interesting. So what happens when we declare the above method, but then we declare something like this?


public static dynamic DoSomethingDynamic(string dyn1, dynamic dyn2)
{
    return "nope!";
}

Interesting. Method overloading with dynamic. Suddenly we are in a situation where we have to factor dynamic into the overload resolution process. So in the above, what happens? Well, thankfully it is implemented in the most obvious way: static types take precedence over dynamic. So if we have the above method, and call it like this:


Console.WriteLine(DoSomethingDynamic("hello", 5));

Then instead of picking the (dynamic, dynamic) overload, the C# compiler picks the overload that matches the most types. But what happens if we implemented these methods:


public static dynamic DoSomethingDynamic(string dyn1, dynamic dyn2)
{
    return "string first!";
}

public static dynamic DoSomethingDynamic(dynamic dyn1, string dyn2)
{
    return "string second!";
}

Looks like we’ve got a bit of a conundrum. If we call this method with (string, string), how do we know which method to call? Well, we can’t, and the C# compiler just throws its hands up and says “The call is ambiguous between the following methods or properties”. Well, that stinks.

The easiest solution would be to implement an overload that takes (string, string); then you couldn’t find yourself in this situation. So, you may be thinking: wouldn’t this always be caught by the compiler?

Can The Compiler Save Us?

Well, let’s consider the situation where you have overloads with (dynamic, dynamic), (string, dynamic), and (dynamic, string). Then we have some code that looks like this:


dynamic val1 = "test";
dynamic val2 = "test2";
Console.WriteLine(DoSomethingDynamic(val1, val2));


Ahhhhhhhhhh! Brain teaser. What do you think will happen? Well, we are dealing with all dynamic variables here, so this is going to compile. And when we run it, do you think that the method with the (dynamic, dynamic) signature will be called? That might make sense at first, but consider that dynamic variables cause method overload resolution to be performed at runtime. So, those variables are dynamically typed, but at runtime they are strings.

So what happens at runtime is that we determine that those variables are strings, and we try to find out what method overloads are available… and we find the ambiguity… at runtime! Here is the proof:
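
Roughly, what you see is a Microsoft.CSharp.RuntimeBinder.RuntimeBinderException whose message reads along these lines (exact wording and the containing type name will vary):

// Unhandled exception at runtime:
//
// Microsoft.CSharp.RuntimeBinder.RuntimeBinderException:
//   The call is ambiguous between the following methods or properties:
//   'Program.DoSomethingDynamic(string, object)' and
//   'Program.DoSomethingDynamic(object, string)'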

Did you happen to notice the references to “object” in the method signatures? Where did that come from? Well, it just so happens that dynamic isn’t really a type. It is just an object with a bit of extra behavior added during compilation.

Dynamic Is Special

So, if we were to look at the reflected code for this we would see that the two variable declarations look like this:


object val1 = "test";
object val2 = "test2";

Okay, but what about that special behavior we were talking about? Well, that comes in where these variables are used. In this case we have two method calls: the first is to DoSomethingDynamic and the next is to Console.WriteLine. In order to invoke these methods with dynamic variables we need to create these things called “call sites”. Call sites are merely objects that represent a call to a method, and they are created at runtime during the first invocation. These objects are what allow the method resolution and caching to occur at runtime on each call. They look something like this (truncated for brevity):


private static void Main(string[] args)
{
object val1 = "test";
object val2 = "test2";
if (o__SiteContainer0.p__Site1 == null)
{
o__SiteContainer0.p__Site1 =
CallSite<Action>.Create(...);
}
if (o__SiteContainer0.p__Site2 == null)
{
o__SiteContainer0.p__Site2 =
CallSite<Func>
.Create(...);
}
o__SiteContainer0.p__Site1.Target.Invoke(...);
}

The importance of this is that when you compile a call to a method which has dynamic parameters but is being passed statically typed variables, there doesn’t really need to be any special behavior at the point where the method is invoked. It is inside of the method, where we are dealing with these dynamic parameters, that we will start seeing CallSites get created. So, the method will look like this:


[return: Dynamic]
public static object DoSomethingDynamic([Dynamic] object dyn1, [Dynamic] object dyn2)
{
if (o__SiteContainer4.p__Site5 == null)
{
o__SiteContainer4.p__Site5 =
CallSite<Func>.Create(...);
}
return o__SiteContainer4
.p__Site5.Target.Invoke(...);
}

Hmmm, so when we are doing method resolution the (dynamic, dynamic) method just looks like (object, object)! So does that mean if we did this:


dynamic val1 = new Object();
dynamic val2 = new Object();
Console.WriteLine(DoSomethingDynamic(val1, val2));

That it would then call the method with the (dynamic, dynamic) signature? Well, yes it does. 🙂 Phew. And with that, we can now get a better picture of the implications of method overloading and our use of dynamic.

Reflecting

So, what does all of this mean? Why is it important? Well, it depends on how you look at it. Are you going to have to deal with method overloading involving dynamic very often? Probably not. Is it interesting to see how much thought and effort it takes in order to design a feature like this? You bet it is.