A (Very) Interesting LINQ Query

I was reading the ASP.NET forums when I found this post:

The scenario is:

We have input data in this format:

EmpId Name Wages
1 Tom $200
1 Tom $300
1 Tom $400
2 Rob $500
2 Rob $600

The required output is as follows:
Name OriginalWage DuplicateWage
Tom $200 $300
Tom $200 $400
Rob $500 $600

We need to combine each person’s first record with all of their other records using LINQ. So I decided to write my own solution to this problem.

I created this sample input data in C#:
var result = new[]
{
    new { SNO = 1, Name = "Tom",  Wages = 200 },
    new { SNO = 1, Name = "Tom",  Wages = 300 },
    new { SNO = 1, Name = "Tom",  Wages = 400 },
    new { SNO = 2, Name = "Rob",  Wages = 500 },
    new { SNO = 2, Name = "Rob",  Wages = 600 },
    new { SNO = 3, Name = "John", Wages = 700 },
    new { SNO = 3, Name = "John", Wages = 800 },
    new { SNO = 3, Name = "John", Wages = 500 },
    new { SNO = 3, Name = "John", Wages = 600 }
};

To achieve the results as per our requirement, I wrote this LINQ query:

var myLinqQuery = (from item in result
                   group item by item.SNO into g
                   let first = g.First()
                   let rest = g.Skip(1).Select(x => new { Name = first.Name, OriginalWage = first.Wages, DuplicateWage = x.Wages })
                   select rest).SelectMany(x => x).ToList();

Here is our desired output:

[Image: linq-group-by (screenshot of the query output)]
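For reference, here is the sample data and the query assembled into a complete console program (my own packaging of the snippets above):

```csharp
using System;
using System.Linq;

public class Program
{
    public static void Main()
    {
        var result = new[]
        {
            new { SNO = 1, Name = "Tom",  Wages = 200 },
            new { SNO = 1, Name = "Tom",  Wages = 300 },
            new { SNO = 1, Name = "Tom",  Wages = 400 },
            new { SNO = 2, Name = "Rob",  Wages = 500 },
            new { SNO = 2, Name = "Rob",  Wages = 600 },
            new { SNO = 3, Name = "John", Wages = 700 },
            new { SNO = 3, Name = "John", Wages = 800 },
            new { SNO = 3, Name = "John", Wages = 500 },
            new { SNO = 3, Name = "John", Wages = 600 }
        };

        // Group by employee id, then pair the first record in each group
        // with every remaining record in that group.
        var pairs = (from item in result
                     group item by item.SNO into g
                     let first = g.First()
                     select g.Skip(1).Select(x => new
                     {
                         first.Name,
                         OriginalWage = first.Wages,
                         DuplicateWage = x.Wages
                     }))
                    .SelectMany(x => x)
                    .ToList();

        foreach (var p in pairs)
            Console.WriteLine($"{p.Name} ${p.OriginalWage} ${p.DuplicateWage}");
    }
}
```

Running this prints one line per (first record, later record) pair, e.g. `Tom $200 $300`, `Tom $200 $400`, and so on for Rob and John.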


What is the difference between GridView, DataList, and Repeater?

GridView and DataGrid display all of the data in a tabular format by default, i.e. in rows and columns. The developer has no real control over changing the tabular layout of a DataGrid.

DataList also displays data in a table, but gives some flexibility in displaying the data row-wise or column-wise via the RepeatDirection property.

The Repeater control is highly customizable. It does not display data in a table by default, so you can customize the way you want to display data from scratch.

Leverage the C# Preprocessor

Like other languages in the C-family, C# supports a set of ‘preprocessor’ directives, most notably #define, #if and #endif (technically, csc.exe does not literally have a preprocessor as these symbols are resolved at the lexical analysis phase, but no need to split hairs…).

The #define directive allows you to set up custom symbols which control code compilation. Be very aware that unlike C(++), C#’s #define does not allow you to create macro-like code. Once a symbol is defined, the #if and #endif directives may be used to test for said symbol. By way of a common example:

#define DEBUG
using System;

public class MyClass
{
    public static void Main()
    {
#if DEBUG
        Console.WriteLine("DEBUG symbol is defined!");
#endif
    }
}
When you use the #define directive, the symbol is only realized within the defining file. However, if you wish to define project-wide symbols, simply access your project’s property page, navigate to the “Configuration Properties | Build” node, and edit the “Conditional Compilation Constants” field. Finally, if you wish to disable a symbol for a given file, you may make use of the #undef directive.
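As a quick sketch of #undef in action (the symbol name is mine; note that #define/#undef must appear before any code in the file):

```csharp
#define TRACE_ON   // define the symbol...
#undef TRACE_ON    // ...then disable it for the rest of this file

using System;

public class UndefDemo
{
    public static void Main()
    {
#if TRACE_ON
        Console.WriteLine("Tracing enabled.");
#else
        Console.WriteLine("Tracing disabled.");   // this branch is compiled
#endif
    }
}
```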

Why Doesn’t C# Implement “Top Level” Methods?

C# requires that every method be in some class, even if it is a static method in a static class in the global namespace. Other languages allow “top level” functions. A recent stackoverflow post asks why that is.

I am asked “why doesn’t C# implement feature X?” all the time. The answer is always the same: because no one ever designed, specified, implemented, tested, documented and shipped that feature. All six of those things are necessary to make a feature happen. All of them cost huge amounts of time, effort and money. Features are not cheap, and we try very hard to make sure that we are only shipping those features which give the best possible benefits to our users given our constrained time, effort and money budgets.

I understand that such a general answer probably does not address the specific question.

In this particular case, the clear user benefit was in the past not large enough to justify the complications to the language which would ensue. By restricting how different language entities nest inside each other we (1) restrict legal programs to be in a common, easily understood style, and (2) make it possible to define “identifier lookup” rules which are comprehensible, specifiable, implementable, testable and documentable.

By restricting method bodies to always be inside a struct or class, we make it easier to reason about the meaning of an unqualified identifier used in an invocation context; such a thing is always an invocable member of the current type (or a base type).
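A tiny example (my own) of what that buys: an unqualified name in an invocation context can only ever be a member of the current type or one of its bases, so the lookup space is small and predictable.

```csharp
using System;

public class Base
{
    protected static void Helper() => Console.WriteLine("Base.Helper");
}

public class Derived : Base
{
    public static void Main()
    {
        // Unqualified invocation: the compiler only searches Derived
        // and its base types; there is no free function to consider.
        Helper();   // resolves to Base.Helper
    }
}
```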

Now, JScript.NET has this feature. (And in fact, JScript.NET goes even further; you can have program statements “at the top level” too.) A reasonable question is “why is this feature good for JScript but bad for C#?”

First off, I reject the premise that the feature is “bad” for C#. The feature might well be good for C#, just not good enough compared to its costs (and to the opportunity cost of doing that feature instead of a more valuable feature.) The feature might become good enough for C# if its costs are lowered, or if the compelling benefit to customers becomes higher.

Second, the question assumes that the feature is good for JScript.NET. Why is it good for JScript.NET?

It’s good for JScript.NET because JScript.NET was designed to be a “scripty” language as well as a “large-scale development” language. The original design of “JScript classic” as a scripting language requires that “a one-line program actually be one line”. If your intention is to make a language that allows for rapid development of short, simple scripts by novice developers then you want to minimize the amount of “ritual incantation” that must happen in every program. In JScript you do not want to have to start with a bunch of using clauses and define a class and then put stuff in the class and have a Main routine and blah blah blah, all this ritual just to get Hello World running.

C# was designed from day one to be a large-scale application development language geared towards pro devs; it was never intended to be a scripting language. Its design therefore encourages the immediate organization of even small chunks of code into components. C# is a component-oriented language. We therefore want to encourage programming in a component-based style and discourage features that work against that style.

This is changing. “REPL” languages like F#, long popular in academia, are increasing in popularity in industry. There’s a renewed interest in “scripty” application programmability via tools like Visual Studio Tools for Applications. These forces cause us to re-evaluate whether “a one line program is one line” is a sensible goal for hypothetical future versions of C#. Hitherto it has been an explicit non-goal of the language design.

(As always, whenever I discuss the hypothetical “next version of C#”, keep in mind that we have not announced any next version, that it might never happen, and that it is utterly premature to think about feature sets or schedules. All speculation about future versions of unannounced products should be taken as “for entertainment purposes only” musings, not as promises about future offerings.)

We are therefore considering adding this feature to a hypothetical future version of C#, in order to better support “scripty” scenarios and REPL evaluation. When the existence of powerful new tools is predicated upon the existence of language features, that points towards getting the language features done.

Why is deriving a public class from an internal class illegal?

In C# it is illegal to declare a class D whose base class B is in any way less accessible than D. I’m occasionally asked why that is. There are a number of reasons; today I’ll start with a very specific scenario and then talk about a general philosophy.

Suppose you and your coworker Alice are developing the code for assembly Foo, which you intend to be fully trusted by its users. Alice writes:

public class B
{
    public void Dangerous() { … }
}

And you write

public class D : B
{
    … other stuff …
}

Later, Alice gets a security review from Bob, who points out that method Dangerous could be used as a component of an attack by partially-trusted code, and who further points out that customer scenarios do not actually require B to be used directly by customers in the first place; B is actually only being used as an implementation detail of other classes. So in keeping with the principle of least privilege, Alice changes B to:

internal class B
{
    public void Dangerous() { … }
}

Alice need not change the accessibility of Dangerous, because of course “public” means “public to the people who can see the class in the first place”.

So now what should happen when Alice recompiles before she checks in this change? The C# compiler does not know if you, the author of class D, intended method Dangerous to be accessible by a user of public class D. On the one hand, it is a public method of a base class, and so it seems like it should be accessible. On the other hand, the fact that B is internal is evidence that Dangerous is supposed to be inaccessible outside the assembly. A basic design principle of C# is that when the intention is unclear, the compiler brings this fact to your attention by failing. The compiler is identifying yet another form of the Brittle Base Class Failure, which long-time readers know has shown up in numerous places in the design of C#.
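Concretely, once Alice’s change is in place the compiler refuses the combination outright (the error text below is from the Microsoft C# compiler):

```csharp
internal class B
{
    public void Dangerous() { }
}

// error CS0060: Inconsistent accessibility:
// base class 'B' is less accessible than class 'D'
public class D : B
{
}
```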

Rather than simply making this change and hoping for the best, you and Alice need to sit down and talk about whether B really is a sensible base class of D; it seems plausible that either (1) D ought to be internal also, or (2) D ought to favour composition over inheritance. Which brings us to my more general point:

More generally: the inheritance mechanism is simply the fact that all heritable members of the base type are also members of the derived type. But the inheritance relationship semantics are intended to model the “is a kind of” relationship. It seems reasonable that if D is a kind of B, and D is accessible at a location, then B ought to be accessible at that location as well. It would be strange if you could only use the fact that “a Giraffe is a kind of Animal” at specific locations.

In short, this rule of the language encourages you to use inheritance relationships to model the business domain semantics rather than as a mechanism for code reuse.

Finally, I note that as an alternative, it is legal for a public class to implement an internal interface. In that scenario there is no danger of accidentally exposing dangerous functionality from the interface to the implementing type because of course the interface is not associated with any functionality in the first place; an interface is logically “abstract”. Implementing an internal interface can be used as a mechanism that allows public components in the same assembly to communicate with each other over “back channels” that are not exposed to the public.
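A minimal sketch of that back-channel pattern (type and member names are mine):

```csharp
using System;

// An internal interface: only code in this assembly can mention it.
internal interface IBackChannel
{
    void Signal();
}

// A public class may legally implement an internal interface.
public class Component : IBackChannel
{
    // Explicit implementation keeps Signal() off the public surface.
    void IBackChannel.Signal() => Console.WriteLine("internal signal");
}

public class Coordinator
{
    public static void Main()
    {
        // Inside the assembly we can use the back channel...
        IBackChannel channel = new Component();
        channel.Signal();
        // ...but outside the assembly, IBackChannel is invisible and
        // Component exposes no Signal method at all.
    }
}
```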

ASP.NET MVC Ajax CSRF Protection With jQuery 1.5

Wow, what a mouthful that title is! Anyways, if you can decipher the title, then you are in the right place. While working on Epic Win Hosting I decided that I wanted to put some groundwork in place to allow for a much more dynamic site in the future. As a result of that choice, I used Backbone.js for a good portion of the page interactions. If you haven’t used Backbone, then you owe it to yourself to go check it out. I’ll also be sure to blog about it in the near future.

Since I decided to do a good portion of the UI using backbone, and many of the forms that we post use the jQuery forms plugin, I wanted to make sure that we were protected from CSRF attacks that might come in via Ajax calls. Also, since Backbone.js uses HTTP verbs such as DELETE and PUT, I decided that I wanted the same CSRF protection to work for those as well.

Since the default ASP.NET MVC CSRF protection only works with form posts, I knew I couldn’t use it. But at the same time, I didn’t want to develop my own solution, since that is probably almost as dangerous as not doing it at all. Unless you really know what you are doing, you probably want to avoid writing too much security related code. So instead of implementing it myself, I decided to do some surgery.

I knew that I wanted my CSRF protection to work for any kind of data. So if I did a normal form post or posted some raw JSON data, I didn’t want to have to do things differently. So I knew I couldn’t put the CSRF verification token in the payload of the request in the way that ASP.NET MVC does it (In case you didn’t know, it renders as a hidden field on a form and posts in a normal manner). So instead, I decided to take the Rails approach, and shove the CSRF info into the headers of my Ajax request.

Sidenote:
If you’ve seen the posts about the CSRF vulnerability in Rails recently, don’t worry. The problem with Rails, as far as I can decipher, was that they weren’t requiring the header token to be passed during Ajax calls, instead relying on the same-origin limitations built within browsers to assume that Ajax calls were valid. There appears to be a bug in Flash that allows an attacker to get around this.
Since jQuery 1.5 had come out just recently and I had been drooling over its Ajax rewrite, I knew that this was the perfect time for me to get some good use out of it. jQuery 1.5 introduced a new feature called “prefilters”; in a nutshell, prefilters allow you to change a request in some way before it is sent to the server. This would be a perfect place for me to shove my CSRF data into the request!

WARNING: Doing the following may lead to tears and tiny kittens getting hurt…proceed at your own risk.

The first thing I had to do was to rip the anti-forgery token code out of ASP.NET MVC. I couldn’t use the version in MVC 3 because the anti-forgery code was pulled out and put into a non-OSS library. Boo! But, since the MVC 2 code was released under MS-PL (thanks for the tip Phil!), I can legally do this, and more importantly, tell you about it! Thankfully the anti-forgery code is not really woven too deeply into any other classes, so it was easy enough for me to pull out. I went ahead and renamed all of these classes so that I wouldn’t get them confused with the classes in the MVC framework itself.

The Html Piece

I then had to create my own HTML helper to write the anti-forgery token out into the page. I did this by copying the Html.AntiForgeryToken helper and replacing its internals with my own classes. The stock helper renders an html hidden field, but we don’t really want that, so I modified the AntiForgery class to instead return a meta tag that we can throw into our header:

@Html.AjaxAntiForgeryToken()

Easy stuff. We can just put this call in your master page or your root layout page, wherever your html head is located.
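The post doesn’t reproduce the helper’s body, so here is a rough sketch of what such an extension method might look like. MvcHtmlString and TagBuilder are standard ASP.NET MVC types; EpicAntiForgeryData.GetTokenValue is a hypothetical stand-in for however the renamed MVC 2 anti-forgery code produces the serialized token string:

```csharp
using System.Web.Mvc;

public static class AjaxAntiForgeryExtensions
{
    // Renders <meta name="__RequestVerificationToken" content="..." />
    // for the document head, instead of MVC's hidden form field.
    public static MvcHtmlString AjaxAntiForgeryToken(this HtmlHelper helper)
    {
        // Hypothetical call into the renamed MVC 2 anti-forgery code;
        // however it is produced, the value must pair with the token cookie.
        string token = EpicAntiForgeryData.GetTokenValue(helper.ViewContext.HttpContext);

        var meta = new TagBuilder("meta");
        meta.MergeAttribute("name", "__RequestVerificationToken");
        meta.MergeAttribute("content", token);
        return MvcHtmlString.Create(meta.ToString(TagRenderMode.SelfClosing));
    }
}
```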

The jQuery Piece

Now we need jQuery to grab this CSRF token and use it in our requests. Due to the fancy new jQuery prefilters, this is actually a tiny little chunk of code:


$.ajaxPrefilter(function (options, originalOptions, jqXHR) {
    var verificationToken = $("meta[name='__RequestVerificationToken']").attr('content');
    if (verificationToken) {
        jqXHR.setRequestHeader("X-Request-Verification-Token", verificationToken);
    }
});

Very clean, and we don’t have to mess with any of our requests (you could technically do this in jQuery 1.4 if you used ajaxSetup’s “beforeSend” option). In fact, most of my requests are going through Backbone.js and so I don’t have access to them at all! In this chunk of code, we just grab the CSRF token and shove it into a header. Couldn’t get any easier! We don’t care what kind of request it is, because we can always shove the header in, it is up to the action what to do with it.

The Backend Piece

Now that we have this verification token getting passed in the header, we just need to verify this token somewhere. The first thing that I did was to copy the ValidateAntiForgeryTokenAttribute class.

Inside of the ValidateAntiForgeryTokenAttribute there is a line that looks like this:


string formValue = context.Request.Form[fieldName];

All we need to do is to grab the value from the header instead of the field:


string formValue =
context.Request.Headers[EpicAntiForgeryData.GetAntiForgeryTokenHeaderName()];

Now we can let the class do the rest of the work.

And Finally…

Now all you have to do is put your new AjaxValidateAntiForgeryToken attribute on the controller actions that you want to validate. When your Ajax event fires, jQuery appends the token to the call; when the call hits your controller action, the attribute plucks the value out of the header and the cookie, and then it does its magic. Just keep in mind that you can’t use the same classes for posting forms regularly anymore, since the header won’t get appended to a non-Ajax post.

What ASP.NET MVC Could Learn From Rails

Most developers that are interested in bettering themselves really do want to hear both sides of the story. They want to understand the strengths of other platforms, not so they can move, but so that they can understand how to make their own framework/platform better. When you read this post, I hope you will read it with that in mind. I hope you will see this not as me criticizing your platform, or someone else’s work, but instead as me saying “here is what I think is cool about this other platform, how can your platform get in on that?”

A Tale Of Two Frameworks

There was a time when Ruby on Rails was the hottest thing on the block, and many ASP.NET developers pined for the day when we could leave behind controls and viewstate and move into the glorious world of web development. Cause what we were doing wasn’t web development, don’t kid yourself. Then ASP.NET MVC came out, and a lot of people (myself included) jumped on that bandwagon and have never looked back.

I have been very happy with ASP.NET MVC for a long time. It gives me a simple-to-use framework that gets out of my way and lets me write real web applications. And I love it! But there is just one problem: websites are quite often complex beasts, and ASP.NET MVC only solves a small part of the puzzle. Every project I start, I have to pull in a membership system, migrations, an ORM, an IoC container, logging, a testing framework, etc… I have to find and plug in all of these tools every time I want to build another site. (NuGet has helped with the finding and integrating part immensely!)

I guess this is to be expected though, ASP.NET MVC wasn’t really meant to be an opinionated framework. They wanted to produce a framework which would have broad appeal across the hundreds of thousands of .NET developers out there. Not an easy task! They also wanted a smaller framework that they could release in short cycles. Based on their goals, I’d say that they have done a great job.

But because of this they couldn’t just pick different frameworks and include them as defaults. First of all, there were few choices that would not have been controversial. Secondly, if they had made controversial decisions, the framework would have been rejected by early adopters and not seen the adoption that it has now.

Creating a generic framework is fine, as long as you understand what you are giving up to get there. The power of frameworks like Ruby on Rails, Django, or CakePHP is that they span the whole stack. While this might lock you down to doing things in a certain way, in the end the benefit derived from the tight integration is very often worth it.

Walking Through The Problem

In order to demonstrate this, let’s take the example of authentication in a web application. In ASP.NET MVC we have the normal ASP.NET authentication mechanism which we can leverage in order to have users and roles, but as most of us know, a lot of people end up rolling their own or looking for an outside solution. Why? Because the user is part of our domain, and the ASP.NET authentication system forces us to use their tables, stored procs, db deployment tools, etc… It is clunky and not extensible at all. So let’s say we want to build an alternative, what would we have to do?

The first thing we need is a model that represents a user. We can either include this in the framework, or maybe just let the developer define the class they need and then implement an interface for us to interact with. Okay, so how do we get the tables into the database? Well, we could create some scripts and put them in the project; the developer can then run those to create the user tables. That doesn’t seem very awesome: what if they aren’t running SQL Server? Yes, some of us do use .NET without SQL Server. What if they need to automate creating the database for testing? Also, how much does it suck to have to manually run the scripts on every machine you need to deploy to?

Well, we could write code to connect to the database and run the DDL to create a table. That would be more automatic. But again, what database are they running, which connection do we use, how do we let the user modify their user object to fit their needs if we need to create the table? Arrrrgh! Okay, let’s just punt on that and say that we will just create scripts for a few databases, and let the user modify and then run them manually, or integrate the scripts into their build process. That sucks.

Now how do we get the user model in and out of the database? We don’t really know how the developer is going to be doing data access, so we can’t pull the user from the database within the framework, we will have to force the user to implement a class that we can call and get users, in the same way that the provider model is currently implemented in .NET. This isn’t terrible, just inconvenient.

So now we are at the point where we have a database table, and we can pull users from the database. The only problem is that the developer has had to implement an interface on his user object, manually edit and run scripts, implement a persistence mechanism (most likely an ORM), and implement a provider to allow us to perform CRUD operations on users. And we are just getting started! What about creating a controller, views, adding routes, validation, sending e-mails, etc…? There are a lot of pieces that go into a successful authentication mechanism, this isn’t easy stuff.

In the ASP.NET MVC world, all of these problems may seem overwhelming. Sure, we could implement a full solution, but we’d have to make a lot of assumptions and force the user into tools that they might not be using. Or we include our own tools, and bypass their stack in the same way that the ASP.NET providers work. This disconnect is what frameworks like Sharp Architecture try to solve, but third party libraries could never depend on frameworks like this until their adoption reached a critical mass. Otherwise they would be spending a lot of effort, for every little return.

How Would You Do This In Rails?

In Rails there are a few different frameworks available to do authentication. The one that I have used is Devise. In order to use Devise, the first thing we would do is to add the gem to our Gemfile (a gem is like a NuGet package, if you don’t know what a NuGet package is, go here) with a line like this:

gem 'devise', '~> 1.3.4'
Then we just run “bundle install” and the gem is installed into our project. Now we have to configure Devise. We can run this command (you can type out ‘generate’ instead of ‘g’ if you want):

rails g
And it will show you all of the generators available to you. You will see a section for Devise. Devise is able to hook into these generators and do some really powerful stuff. We can see from this list that there is a generator for devise called “devise:install”. We can run that:

rails g devise:install
Now Devise has installed its configuration file into your app, and you are almost ready to go. You’ll notice there is another generator just called “devise”; if we run it, it tells us to give it a model name so it can generate a User model for us. Because everything in Rails has a place, you simply run this command:

rails g devise User
And it drops a User model into your /app/models folder, generates a migration for you, and puts routes into your routes.rb file. Now all I have to do is run my migrations (I can edit the migration, or create another migration to alter my user table, allowing me to completely customize my user):

rake db:migrate
And now my database tables are created and everything is ready to go. I can put this on my controller:

before_filter :authenticate_user!
And now that controller will redirect me to a login/signup page, allow me to create an account, send confirmation e-mails (if I want), etc… I also have a number of helper methods that I can run like “user_signed_in?” or “current_user” in order to interact with users on a more fine grained level.

What Happened There?

Well, you saw the power and productivity of a full-stack framework. Devise knows what tools you are using (or how to integrate with them), and where everything goes. Because of this we don’t need to tell it how to create a model, create database tables, save and retrieve users from the database, connect to the database, send e-mail, etc… These are all conventions and sensible defaults that have already been set up in your application.

And for anything that isn’t automated, there are Rails generators that we can hook into to create config files, add routes, views, and any other assets. We are able to do this because most everything has a place in a Rails app. Rails is a very opinionated framework, and it shows. If you follow the “Rails Way” then things will be smooth and super productive.

How Do We Make ASP.NET MVC Better?

That is a very difficult question. You see, the Rails community long ago agreed to go along with the “Rails Way” in exchange for increased productivity and integration. I’m not sure that the .NET developer community would respond favorably to the kinds of changes which would be required to bring ASP.NET MVC up to the level of integration that Rails enjoys.

This doesn’t mean that they can’t come close though. One thing they could do is start to favor certain tools more in the framework: set up conventions and tools that developers are expected to use, and let them know that if they stray, they are on their own. However, the problems here are obvious. The tools that are going to be made the defaults will be Microsoft’s own tools, like Entity Framework or MEF. This could create some serious friction with the developer community if they start to integrate too tightly with these tools.

An MVC Uber-Framework?

Another interesting path would be to see some full stack frameworks spring up on top of ASP.NET MVC. Maybe Microsoft could implement another framework on top of ASP.NET MVC which would be fully integrated, but could be tossed out if you wanted just the underlying web framework. As long as this framework was simple and powerful enough, you would probably see a large number of developers jump on board, which would give OSS creators a reason to integrate with it. On the MS platform, a tool like this usually has to come from within Microsoft itself, otherwise it has an extremely hard time gaining any momentum.

I can see a lot of eyes rolling and hands shooting up after that last paragraph, but hear me out. If Microsoft continues down the path they are now, and they start integrating things like Entity Framework and MEF as defaults, but not truly integrated, then all we are going to get is a bunch of code generation when we start a project. This isn’t going to help OSS developers extend the framework. But, if they could instead tease out the web layer from the rest, and build a more integrated application stack on top of it, that would go a lot further. I’d love to hear people’s feedback on this.

Will It Change?

I don’t know! ASP.NET MVC is a fine framework. I just don’t feel like it is as productive as it could be. Good, solid, testable web development just doesn’t have to be as hard as it currently is with ASP.NET MVC. ASP.NET MVC has given us a good start on the web layer, but hooking up our applications to this front end is often repetitive, clumsy, and cumbersome. All I want is for my whole stack to play nicely together, is that really too much to ask?