Saturday, December 24, 2011

F# Grouping Pattern



As pattern matches contain more and more rules, you might want to combine patterns. There are two ways to combine them. The first is OR, represented by a vertical pipe |, which groups patterns together so the rule fires if any of the grouped patterns match. The second is AND, represented by an ampersand &, which groups patterns together so that the rule fires only if all of the grouped patterns match.

let vowelTest c =
    match c with
    | 'a' | 'e' | 'i' | 'o' | 'u' -> true
    | _ -> false

let describeNumbers x y =
    match x, y with
    | 1, _
    | _, 1 -> "One of the numbers is one."
    | (2, _) & (_, 2) -> "Both of the numbers are two."
    | _, _ -> "Other."


The AND pattern has little use in normal pattern matching; however, it's invaluable when using active patterns.
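To illustrate, here is a small sketch of the AND pattern combined with a partial active pattern (the DivisibleBy pattern name is my own, not from this post):

```fsharp
// A partial active pattern: matches only when n is divisible by d
let (|DivisibleBy|_|) d n =
    if n % d = 0 then Some () else None

// The AND pattern (&) fires only when both active patterns match
let fizzBuzz n =
    match n with
    | DivisibleBy 3 & DivisibleBy 5 -> "FizzBuzz"
    | DivisibleBy 3 -> "Fizz"
    | DivisibleBy 5 -> "Buzz"
    | _ -> string n
```

For example, fizzBuzz 15 evaluates to "FizzBuzz", because 15 matches both DivisibleBy 3 and DivisibleBy 5.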

Sunday, December 18, 2011

F# Pattern Matching


Pattern matching is logically similar to a switch statement in C#, Java, C++, etc., but it's much more powerful. A pattern match is a series of rules, each of which executes if its pattern matches the input. The pattern match expression then returns the result of the rule that matched. So, all rules in a pattern match must return the same type.

In F#, you use the match and with keywords with a series of pattern rules, each followed by an arrow ->. Here's a sample F# pattern match that checks whether a number is odd.


> let isOdd x = (x % 2 = 1)
let describeNumber x =
    match isOdd x with
    | true -> printfn "number is odd"
    | false -> printfn "number is even";;

> describeNumber 4;;
number is even
val it : unit = ()


Truth Table
Let's take the AND truth table as an example of type inference on pattern matches. Here's the code:


> let testAND x y =
    match x, y with
    | true, true -> true
    | true, false -> false
    | false, true -> false
    | false, false -> false;;
> testAND true false;;
val it : bool = false


Match Failure
If no match is found during pattern matching, an exception of type Microsoft.FSharp.Core.MatchFailureException is raised. You can avoid this by making sure that all possible cases are covered. Fortunately, the F# compiler will issue a warning when it can determine that the pattern match rules are incomplete.


> let testAND x y =
    match x, y with
    | true, true -> true
    // Commented out: | true, false -> false
    | false, true -> false
    | false, false -> false;;
> testAND true false;;

Microsoft.FSharp.Core.MatchFailureException: The match cases were incomplete at:....
Stopped due to error


Pattern matching rules are checked in the order they are declared.

Sunday, December 11, 2011

F# Symbolic Operator


Let's think of writing add 1 2 in a declarative way instead of the traditional 1 + 2. That's what a symbolic operator enables. F# not only has built-in symbolic operators, but also lets you define your own. This helps developers write code in a cleaner, more elegant way. In fact, symbolic functions are not a form of operator overloading; rather, they are functions whose names are made out of symbols.

A symbolic operator name can be made up of a sequence of symbols such as ! % & * + - . / < = > @ ^ | ~. Here's code that defines a new function ! that computes the factorial of a given number:

> let rec (!) x =
    if x <= 1 then 1
    else x * !(x - 1);;

> !5;;
val it : int = 120

Symbolic operators that come before their parameters use what is known as prefix notation. You must prefix the function name with a tilde ~, exclamation point !, or question mark ?. In the code below, the function ~++ is prefixed with a tilde, so to call it you write ~++ 1 2 3 rather than 1 ~++ 2 3. This lets you use symbolic operators that more naturally fit the style of function being defined:

> let (~++) x y z = x + y + z;;

val (~++) : int -> int -> int -> int

> ~++ 1 2 3;;
val it : int = 6

In addition to letting you name functions in ways that map more closely to mathematics, symbolic operators can also be passed to higher-order functions by simply putting parentheses around the symbol.
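For instance, a parenthesized symbolic operator is an ordinary function value (a small sketch using the built-in + and * operators):

```fsharp
// (+) and (*) become plain two-argument functions when parenthesized
let sum = List.fold (+) 0 [1; 2; 3; 4; 5]   // 15
let product = List.reduce (*) [1; 2; 3; 4]  // 24
```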

Sunday, December 4, 2011

F# List


Last week we saw that tuples group values into a single entity. Lists allow you to link data together to form a chain. F# defines a list as semicolon-delimited values enclosed in square brackets:


let countDown = [9;8;7;6;5;4;3;2;1];;

F# lists have two primitive operations. They are (i) the cons operator ::, which adds an element to the front of a list, and (ii) the append operator @, which joins two lists together. Examples:
> let countDown = 10 :: countDown;;

val countDown : int list = [10; 9; 8; 7; 6; 5; 4; 3; 2; 1]

> let countDown = countDown @ [0];;

val countDown : int list = [10; 9; 8; 7; 6; 5; 4; 3; 2; 1; 0]

List Range

To declare a list of ordered numeric values, a list range specifies the lower and upper bounds:
>let counter = [1..10];;

val counter : int list = [1;2;3;4;5;6;7;8;9;10]

List comprehension

It's a rich syntax that allows you to generate lists inline with F# code. The body of the list comprehension executes until it terminates, and the list is made up of the elements returned via the yield keyword.
let numbersNear x =
    [
        yield x - 1
        yield x
        yield x + 1
    ];;

List.map

The List.map function creates a new collection by applying a function to each element of a given collection. For example, mapping an increment function over [1; 2; 3; 4] produces [2; 3; 4; 5].
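A minimal sketch of this in F# Interactive (the binding name r1 is illustrative):

```fsharp
> let r1 = List.map (fun x -> x + 1) [1; 2; 3; 4];;

val r1 : int list = [2; 3; 4; 5]
```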

List.iter

It iterates through each element of the list and calls a function that you pass as a parameter.
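A small example sketch in F# Interactive:

```fsharp
> List.iter (fun x -> printfn "Counting down: %d" x) [3; 2; 1];;
Counting down: 3
Counting down: 2
Counting down: 1
val it : unit = ()
```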

Tuesday, November 29, 2011

F# Core Types


Besides the primitive types, the F# library includes several core types that allow you to organize, manipulate and process data.

Unit
The unit type has a single value, signifying nothing of consequence. unit can be thought of as a concrete representation of void and is represented in code via ():


> let x=();;

val x : unit=()

> x;;
val it : unit = ()

if expressions without a matching else must return unit, because if they returned a value, what would the expression's value be when the condition was false?

Also, in F#, every function must return a value (think of a C# method with a void return type), so even if a function doesn't conceptually return anything, it should return the unit value.
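A brief sketch of both points (the function name is hypothetical):

```fsharp
// printfn returns unit, so this if without an else is legal;
// warnIfNegative has type int -> unit
let warnIfNegative x =
    if x < 0 then printfn "%d is negative" x
```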

The ignore function can swallow a function's return value when you want to return unit:

> let square x = x * x;;

val square : int -> int

> ignore (square 3);;
val it : unit = ()


Tuple
A tuple (pronounced "two-pull") is an ordered collection of data and an easy way to group common pieces of data together. For example, tuples can be used to track the intermediate results of a computation.

To create an instance of a tuple, separate a group of values with commas, and optionally place them within parentheses. Below, fullName is an instance of a tuple, while string * string is the tuple's type:

> let fullName = ("ganesan", "senthilvel");;
val fullName : string * string = ("ganesan", "senthilvel")

Tuples can contain any number of values of any type. You can even have a tuple that contains other tuples.

Friday, November 25, 2011

TechMela


Last week was completely packed with technical conferences. To start with, Amazon held its Amazon Web Services (AWS) Cloud tour at Chennai on Tuesday 15 Nov. Amazon CTO Dr. Werner Vogels was the chief guest for the conference, with an impressive keynote address. He talked about cloud computation via Hadoop and MapReduce, and storage via NoSQL. We had a discussion panel with 4 CTOs who are prime customers of the AWS Cloud. On Thursday 17 Nov, our firm held its architecture summit 2011, and our CTO spoke about governance, life cycle management, listening to the customers, and shared services. On Friday 18 Nov, Microsoft had their web camp with a keynote address by Gianugo Rabellino, product manager from Seattle. Sessions were around HTML5, CSS3, WebMatrix and Razor, with hands-on demos.

On Saturday, I took part in the ThoughtWorks bar camp at their Chennai office. I heard about the work culture of the firm and their innovation in Agile development, but was literally thrilled on seeing their work environment. It demands a lot of maturity in their developer community. Miles to go! Back to the tech mela: I presented a session on cloud computing using Azure between 10:30-11:15 am with a few tech geeks. Had a few hands-on demos and fighting debates. Really loved it. Attended other sessions on Thrift, Scala, CoffeeScript, Rails and open source impact. It was a week jam-packed with technical stuff, hence the name TechMela.

Lessons learnt from TechMela:
  • MapReduce for independent, high-CPU-intensive threads
  • Evolution of Amazon and its cloud entry as a pioneer, driven by their internal needs
  • AWS storage trend using the NoSQL model
  • Building shared services along with the development
  • Life cycle management impact on architecture
  • Vital role of governance in end-to-end architecture
  • Emerging technology role of HTML5 and its impact on RIA vendors like Microsoft, Adobe
  • Role of functional programming (like Scala, F#) to build complex apps
  • New frameworks like Thrift, Rails
  • Awesome environment to present my cloud paper

Of course, I had a great opportunity to connect with tech giants (CTOs, local tech geeks). Happy Thanksgiving weekend, folks! Will continue our F# journey...


Saturday, November 19, 2011

F# Function


Function Definition
F# functions are defined like values, except everything after the name of the function serves as parameters. Here's an incrementer function:


> let incrementer x = x + 1;;

val incrementer : int -> int

> incrementer 6;;
val it : int = 7


Unlike C#, F# has no return keyword; a function returns its last evaluated expression.

Type Inference
F# is statically typed; calling the incrementer function with a floating-point value results in a compilation error:

> incrementer 6.0;;

error FS0001: This expression has type float


The reason is type inference: the compiler inferred x to be an int from the expression x + 1, without the types being explicitly stated. Type inference is different from dynamic typing. Though F# allows you to omit types, it doesn't mean that type checking is not enforced at compile time.
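A short sketch of how annotations interact with inference (the bindings are illustrative):

```fsharp
// x is inferred to be int because of the int literal 1
let increment x = x + 1               // int -> int

// an annotation redirects inference to float
let incrementF (x: float) = x + 1.0   // float -> float
```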

Generic Function
The type of a generic parameter can have the name of any valid identifier prefixed with an apostrophe. Note that a type annotation alone is not enough here: in let incrementer (x:'a) = x + 1, the body x + 1 constrains 'a to int, and the compiler warns that the code is less generic than the annotation suggests. To define a truly generic incrementer, mark the function inline so the + constraint is resolved at each call site:

> let inline incrementer x = x + LanguagePrimitives.GenericOne;;

val inline incrementer : ^a -> ^a (with static member constraints)

Writing generic code is important for maximizing code reuse. In this example, the parameter can be an int, float, bigint, or any other type that supports + and has a one value.

Saturday, November 12, 2011

F# Primitive Types


A type is an abstraction, and is primarily about enforcing safety. F# is statically typed, meaning that type checking is done at compile time. For example, if a function accepts an integer as a parameter, you will get a compilation error if you try to pass a non-integer value.

F# contains two varieties of numeric primitives: integers and floating-point numbers. F# uses a let binding to create a value for numeric primitives, and allows you to specify values in hexadecimal (base 16), octal (base 8) or binary (base 2) with the prefixes 0x, 0o and 0b. A few let bindings:

> let hex =0xFCAF;;
val hex : int = 64687
> let bin =0b00101010y;;
val bin : sbyte = 42y

Arithmetic
You can use the standard arithmetic operators +, -, *, / on numeric primitives. Sample F# code:

> 32450s + 1s;;
val it : int16 = 32451s

By default, arithmetic operators do not check for overflow. So, if you exceed the range, the result wraps around (for example, to a negative number).
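If you do want overflow to raise an exception, F# provides checked operators in the Operators.Checked module (a sketch; 32767s is the largest int16):

```fsharp
> open Microsoft.FSharp.Core.Operators.Checked;;

> 32767s + 1s;;
System.OverflowException: Arithmetic operation resulted in an overflow.
```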

Math
F# features all the standard math functions, such as:
abs - absolute value of a number
ceil - round up to the nearest integer
exp - raise e to the given power
floor - round down to the nearest integer
pown - raise a number to an integer power
sqrt - square root of a given number
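A few sample calls in F# Interactive:

```fsharp
> abs (-3);;
val it : int = 3

> floor 4.7;;
val it : float = 4.0

> pown 2 10;;
val it : int = 1024

> sqrt 16.0;;
val it : float = 4.0
```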

Conversion
F# doesn't perform implicit conversions between primitive types. Requiring an explicit conversion when assigning, say, an int64 to an int16 eliminates subtle bugs by removing surprise conversions. The developer must use an explicit conversion function, such as the built-in conversion functions or System.Convert.
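The built-in conversion functions are named after their target type (a small sketch):

```fsharp
> let big = 1234L;;
val big : int64 = 1234L

> let small = int16 big;;   // explicit int64 -> int16 conversion
val small : int16 = 1234s
```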

Bitwise Operation
Primitive integer types support bitwise operators for manipulating values at a binary level: &&& (and), ||| (or), ^^^ (xor), <<< (left shift), >>> (right shift).
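A few sample evaluations:

```fsharp
> 0b1100 &&& 0b1010;;   // bitwise and: 1000
val it : int = 8

> 0b1100 ||| 0b1010;;   // bitwise or: 1110
val it : int = 14

> 1 <<< 4;;             // shift left four places
val it : int = 16
```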

BigInt
If the application is dealing with larger data, F# provides the BigInt type for representing arbitrarily large integers. BigInt uses the I suffix for literals:

> open System.Numerics
let megabyte = 1024I * 1024I
let gigabyte = megabyte * 1024I

Although BigInt is heavily optimized for performance, it's still much slower than the primitive integer data types.

Saturday, November 5, 2011

First Program


Visual Studio 2010 includes a new programming language, F#. To create a new console application

  1. On the File menu, point to New, and then click Project.
  2. If you cannot see Visual F# in the Templates Categories pane, click Other Languages, and then click Visual F#. The Templates pane in the center lists the F# templates.
  3. Look at the top of the Templates pane to make sure that .NET Framework 4 appears in the Target Framework box.
  4. Click F# Application in the list of templates.
  5. Type a name for your project in the Name field.
  6. Click OK.
  7. The new project appears in Solution Explorer.

The first F# program holds an integer (and its square) and a string value, and then prints them.


open System
let iNum = 5
let strMsg = "Hello"
let iSquare = iNum * iNum
System.Console.WriteLine(iNum)
System.Console.WriteLine(strMsg)
System.Console.WriteLine(iSquare)


Like other .NET languages, press CTRL+F5 to run the code. A Command Prompt window appears that contains the following values.

Otherwise, you can execute F# program using Interactive window. First, select the let expressions in the previous procedure. Then, right-click the selected area and then click Send to Interactive. Alternatively, press ALT+ENTER.

The F# Interactive window opens and the results of interpreting the let expressions are displayed.

Friday, October 28, 2011

Functional Program F#


F# is ideal for data-rich, concurrent and algorithmic development: "simple code to solve complex problems". F# is a simple and pragmatic programming language combining functional, object-oriented and scripting programming, and supports cross-platform environments including PC, Mac, and Linux.

How does a functional program differ from a traditional program?

Every computer program takes input in some form and produces output in some form. That input may be generated by keystrokes on the keyboard, from data bytes in a disk file, by a stylus on a touch sensitive screen, or in many other ways. However, when the input reaches the program, it has been converted to bit patterns (and these are equivalent to numbers). The program's output may take the form of characters or images drawn on a screen, data bytes written to a file, sounds emitted by a speaker, or many other forms again. However, the program itself generates only bit patterns to produce all these different kinds of output.
In its simplest form, a computer program inputs numbers and produces numbers as its result. With this viewpoint, a program is very similar in nature to a mathematical function. For example, the line of mathematics
let h = sin(π/4)
could be viewed as a request to evaluate the sine function with π/4 as its input and to use the name h for the output, i.e. the function result.

Microsoft Research Cambridge says, 'F is for Fun.' Isn't it!

Saturday, October 22, 2011

Motivation cont..


As a continuation of the July 31, 2010 motivation blog and last week's talent blog, I'm glad to share that Chris Alcock and WishMesh recommended my cloud article on CodeProject.

http://blog.cwa.me.uk/2011/09/07/the-morning-brew-932/

http://wishmesh.com/2011/10/2011-links-no-3-2/

Thanks to all my supporters.

This weekend, my paper on Lucene search programming got published at CodeProject: http://www.codeproject.com/KB/applications/LuceneSearchProgramming.aspx

I have shifted my focus towards F#, a functional programming language. It will be covered in the
upcoming weeks.

Thursday, October 20, 2011

Talent Is Overrated


Recently, I read the book titled 'Talent Is Overrated'- What Really Separates World-Class Performers from Everybody Else by Geoff Colvin. Interesting and inspiring points from this book:

• Recent research undermines the notion of genius as innate talent or ability.
• Talent is a factor in your career arc, but it is a poor indicator of your future achievements.
• In terms of excellent performance, sharp focus, hard work and a strong memory seem to matter more than a high IQ. “Deliberate practice” matters most.
• Deliberate practice involves defining a clear goal, analyzing the elements of success and designing a program for becoming excellent in each element.
• You can raise your level of innovation and attainment with deliberate practice.
• The amount of time you practice is the best indicator of your probable success.
• Deliberate practice enables you to perceive, know and remember more about your field.
• Age matters to great performance. Adults can accumulate expertise and resources, but their responsibilities may prohibit long hours of deliberate practice.
• The highest achievers seek copious feedback to help them do better work.
• Great performance is based on deliberate practice energized by intrinsic passion.

Sunday, October 16, 2011

Google Docs steps offline



The cloud can be a very good way to serve up data and applications but it is not necessarily the best place to work, day in and day out. We all need to plan for days when there is no Internet connection.

Google is acknowledging that this week. It is beginning the process of giving Google Docs users the option to work offline. It will come in stages. For now, you can have your local versions synched up with the server copies. And eventually, you'll be able to edit offline and keep the local data in sync with the server. But not yet. And offline viewing works only with Chrome for now. Still, it's a first step.

Users started to see this option over the last few days when they logged into their Google Docs or Calendar accounts; a box popped open in the upper right offering to guide you through the offline setup. If you miss it the first time, you'll find the Offline option on the menu that drops from the Settings "gear" icon.

Thursday, October 6, 2011

Steve Jobs


Steve Jobs, the transcendent Silicon Valley entrepreneur who reinvented the world's computing, music and mobile phone industries and changed the daily habits of millions around the globe, died on Wednesday 5 Oct 2011 at the age of 56.

His death after a years-long battle with pancreatic cancer sparked an immediate outpouring of tributes as world leaders, business rivals and fans alike lamented the tragedy of his premature passing and celebrated his monumental achievements.

"The world has lost a visionary. And there may be no greater tribute to Steve's success than the fact that much of the world learned of his passing on a device he invented," US President Barack Obama said in a statement.

Apple's home page is attached in the image. He was about to launch iOS 5 on 12 Oct 2011.

The 317 Apple patents that list Steven P. Jobs among the group of inventors offer a glimpse at his legendary say over the minute details of the company’s products — from the company’s iconic computer cases to the glass staircases that are featured in many Apple stores.

He is a legend in the industry, and taught us many lessons through his own life. Hats off to his contributions! May his soul rest in peace.

Thursday, September 22, 2011

UP2011 Cloud Conference


The impact of cloud on the global economy, and trends and developments in cloud computing for 2011-2012, is the theme for the year-end event in Mountain View, California in December 2011. The format of the International UP 2011 Cloud Conference has been developed to provide a platform for users and providers to approach cloud from theoretical and practical standpoints. Keynote sessions and panel discussions will contribute to a more precise understanding of the definition, adoption and development of cloud. The conference also aims at enriching the dialogue between various users and providers from different industry verticals.

A 'common language' among cloud enablers and consumers has already been sought at the first UP and Cloud Slam conferences, which provided highly valuable input into the still-emerging field of cloud computing. In Toronto 2009, San Francisco 2010 and Mountain View 2011 (Spring), enterprise cloud adopters and major cloud providers came together to discuss various aspects of the cloud. Different perspectives on the issue were examined, and the conferences provided important insights into the sub-themes of cloud. Nevertheless, open questions remain: they deal with the general understanding of what benefits cloud computing brings for business and the global economy in general. Conference participants are urged to resolve the ambiguities in the understanding of cloud computing and find better ways to leverage cloud for their business goals.

UP 2011 conference adopts an interdisciplinary and international perspective aiming to approach the theme from an analytical and global point of view that also includes the discussion of appropriate methodology to analyse and measure effects and success of cloud computing.

For more details, refer to http://up-con.com/

Thursday, September 15, 2011

Intel’s Cloud 2015 vision


Intel’s Cloud 2015 vision – which aims to achieve cloud federation, automation and device-awareness – is almost entirely in Intel’s court. Considering its prevalence in devices from servers to netbooks, Intel can almost singlehandedly accomplish all of the goals at the hardware level, although it will still need plenty of support from the software community. However, as certain antitrust allegations against Intel (sub req’d) illustrate (in which server makers Dell, HP and IBM allegedly abandoned planned AMD offerings at Intel’s behest), the company does have the cachet to affect product strategies. I’m not implying any illegal activity, but rather pointing out that if anyone has the might to convince IT vendors, cloud providers and device makers to collaborate on standards and interoperability, it’s Intel.

Intel likes client-aware for two reasons. First, Intel wants to keep selling “fat” clients, because “fat client” equals “lots of transistors and performance.” About three years ago, Intel started talking trash about thin client in the context of the start of the ARM wars.

In the intervening years, Intel has toned down the anti-thin-client rhetoric just a bit, but the company is still looking to sell fully-featured clients—or “rich clients” as the chipmaker prefers to call them—of the kind that rival ARM can’t yet match.

Apart from the fact that Intel wants to see clients maintain a robust appetite for the transistors and features it supplies, there is another reason why Intel likes the client-aware cloud vision: lock-in.

Intel would love it if the most convenient and secure way for you to connect to an Intel-powered cloud is with an Intel-powered client. If Intel can build clouds that are vPro-aware and that work best with vPro-enabled clients, then you’ll have a real incentive to make sure that Intel—not ARM—is inside your smartphone, tablet, or laptop.

So, that’s the broad outline of Intel’s vision for the cloud in three years. What’s your take?

Friday, September 9, 2011

Amazon Elasticache


This week, my first cloud paper, Cloud Programming Concepts, based on a few months' effort (http://www.codeproject.com/KB/azure/CloudProgrammingConcept.aspx), got published in the cloud zone of CodeProject at http://www.codeproject.com/Zones/Cloud/

Recently, AWS launched Amazon ElastiCache, a new service that makes it easy to add distributed in-memory caching to any application. Amazon ElastiCache handles the complexity of creating, scaling and managing an in-memory cache, freeing up brainpower for more differentiating activities. There are many success stories about the effectiveness of caching in many different scenarios; besides helping applications achieve fast and predictable performance, it often protects databases from request bursts and brownouts under overload conditions. Systems that make extensive use of caching almost all report a significant reduction in the cost of their database tier. Given the widespread use of caching in many of the applications in the AWS Cloud, a caching service had been high on the request list of AWS customers.

Amazon ElastiCache is compliant with Memcached, which makes it easy for developers who are already familiar with that system to start using the service immediately. Existing applications, tools and libraries that are using a Memcached environment can simply switch over to using Amazon ElastiCache without much effort.

For more details on Amazon ElastiCache visit the detail page http://aws.amazon.com/elasticache





Wednesday, August 31, 2011

Cloud Logging


Last week, I noticed the new buzzword 'Cloud Logging' at our architect Ambal's desk. This blog is about that.

Trust is not only a cloud issue. Alan Murphy points out that to get the benefits of clouds, one has to trust the providers for certain things, but doing so continues a trend in technology.

Clouds will need different audit models than do traditional data centers. The diagram shows physical servers onto which the virtual machine instances (VMI) may map. As each VMI generates a loggable event, typically using calls to syslog or snmp, the physical server inserts a time stamp from a trusted (i.e., cryptographically signed) network time protocol server, and then transmits multiple copies of the log events to distributed master logs within the provider infrastructure. At those locations, software and servers sort the log events by VMI and customer, and create viewable, secure logs that the customers can audit. This design is towards LaaS (Logging As A Service)

Alternatively, in a community cloud, independent auditors can apply suitable tests to the customer data extractors and certify them, perhaps for a given digitally signed version of the code. This design approach does assume that the VMI-hypervisor is trusted; there have been some experimental side channel attacks from one VMI to another. This is an area, especially when there are legal requirements to demonstrate due diligence, in which recognized expert help may be needed.




Saturday, August 27, 2011

Apple iCloud


iCloud stores your music, photos, apps, calendars, documents, and more in the cloud, and wirelessly pushes them to all your devices — automatically. It's the easiest way to manage your content.

iCloud is so much more than a hard drive in the sky. It’s the effortless way to access just about everything on all your devices. iCloud stores your content so it’s always accessible from your iPad, iPhone, iPod touch, Mac, or PC. It gives you instant access to your music, apps, latest photos, and more. And it keeps your email, contacts, and calendars up to date across all your devices. No syncing required. No management required. In fact, no anything required. iCloud does it all for you.

When you sign up for iCloud, you automatically get 5GB of free storage. And that’s plenty of room, because of the way iCloud stores your content. Your purchased music, apps, books, and TV shows, as well as your Photo Stream, don’t count against your free storage. Since your mail, documents, Camera Roll, account information, settings, and other app data don’t use as much space, you’ll find that 5GB goes a long way. And if you need more storage, you can easily purchase a storage upgrade right from your device.

Saturday, August 20, 2011

Amazon Cloud Player


An interesting product from Amazon: the Amazon Cloud Player. Amazon has made its entry into the music streaming world with Amazon Cloud Player. Rather than streaming a library of predetermined music, Cloud Player lets you upload your existing music library and stream it from any computer or Android device.

Amazon Cloud Player is a browser-based application that supports Mac and PC computers and iPad devices. Cloud Player is not optimized to run on some mobile phones or tablets, including iPhones, Blackberrys, and Windows Mobile devices. For Android phones and tablets, it's recommended to install the Amazon MP3 app for Android, which includes Amazon Cloud Player for Android.

You can upload your existing music library to Amazon Cloud Drive so all of your music is stored in one place and accessible from anywhere. All Amazon accounts have access to 5GB of free Cloud Drive storage for uploading content. Additional storage is available for an annual fee.

Amazon has thrown down the gauntlet and set a high bar for cloud-based music streaming. Apple and Google, which are expected to launch their own cloud players sometime this year, will have to match Amazon on usability and price if they’re going to compete.

Amazon can’t rest on its laurels though; Apple will surely harness its control of the iPhone, iTunes and iOS to boost its own offering and give the shopping giant a run for its money.

Saturday, August 6, 2011

Cloud Security


Cloud computing security (sometimes referred to simply as "cloud security") is an evolving sub-domain of computer security, network security, and, more broadly, information security. It refers to a broad set of policies, technologies, and controls deployed to protect data, applications, and the associated infrastructure of cloud computing. Cloud security is not to be confused with security software offerings that are "cloud-based" (a.k.a. security-as-a-service).

Many commercial software vendors have offerings such as cloud-based anti-virus or vulnerability management. Third-party cloud computing represents the promise of outsourcing as applied to computation. Services such as Microsoft’s Azure and Amazon’s EC2 allow users to instantiate virtual machines (VMs) on demand and thus purchase precisely the capacity they require when they require it. In turn, the use of virtualization allows third-party cloud providers to maximize the utilization of their sunk capital costs by multiplexing many customer VMs across a shared physical infrastructure.

In the emerging cloud security space, Craig Balding laid the foundation of cloudsecurity.org (http://cloudsecurity.org/). This site is for people who want to learn more about cloud computing from an enterprise security perspective. Lots of interesting topics, discussions and blogs are available, so interested folks can benefit from it.

Saturday, July 30, 2011

Cloud Types


So far, we have had an overview of Microsoft cloud storage and its types. Let's focus on the 3 different types of cloud. They are:
  1. Public Cloud
  2. Private Cloud
  3. Hybrid Cloud
Public Cloud
Cloud storage means different things to different people depending on how it's implemented. The most common implementation is a 'public cloud', which is essentially storage capacity accessed through the internet or a wide area network (WAN) connection, and purchased on an as-needed basis. Users can expand capacity almost without limit by contacting the provider, which typically operates a highly scalable storage infrastructure, sometimes in physically dispersed locations. It comes from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.

Private Cloud
Private cloud (also called internal cloud or corporate cloud) is a marketing term for a proprietary computing architecture that provides hosted services to a limited number of people behind a firewall. Advances in virtualization and distributed computing have allowed corporate network and datacenter administrators to effectively become service providers that meet the needs of their "customers" within the corporation. Marketing media that uses the words "private cloud" is designed to appeal to an organization that needs or wants more control over their data than they can get by using a third-party hosted service such as Amazon's Elastic Compute Cloud (EC2) or Simple Storage Service (S3) or Windows Azure.

Hybrid Cloud
A hybrid cloud environment, consisting of multiple internal and/or external providers, will be typical for most enterprises.

Sunday, July 17, 2011

Azure Queue


In general, a queue stores messages that may be read by any client who has access to the storage account. The Azure queue is a FIFO queue, so when you peek/get a message, it returns the first message (the earliest by insertion time) in the queue. The problem is that this ordering is not guaranteed, so you need to plan your application carefully.

When a new consumer requests a message from an Azure queue, the next available visible message is sent to it. If a consumer asks for a 30-second processing time but fails to delete the message during that window, the message is marked visible again by the queue and is made available to the next consumer. If the consumer tries to delete a message after the timeout, the queue checks the 'popreceipt' to see if it is indeed the most recent requester of the message; if not, an exception is thrown.
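The visibility-timeout and pop-receipt behavior described above can be simulated with a tiny in-memory sketch. This is not the Azure Queue API (the class and method names below are invented); itz just an illustration of the semantics.

```python
import itertools
import time

class ToyVisibilityQueue:
    """In-memory sketch of the visibility-timeout semantics described above.
    NOT the real Azure Queue API -- purely illustrative."""

    def __init__(self):
        self._messages = []             # each: {text, visible_at, pop_receipt}
        self._receipts = itertools.count(1)

    def put(self, text):
        self._messages.append({"text": text, "visible_at": 0.0,
                               "pop_receipt": None})

    def get(self, visibility_timeout=30.0, now=None):
        """Hand out the first visible message and hide it for the timeout."""
        now = time.time() if now is None else now
        for msg in self._messages:      # roughly FIFO, as the post notes
            if msg["visible_at"] <= now:
                msg["visible_at"] = now + visibility_timeout
                msg["pop_receipt"] = next(self._receipts)
                return msg["text"], msg["pop_receipt"]
        return None                     # nothing visible right now

    def delete(self, pop_receipt):
        """Delete succeeds only with the message's most recent pop receipt."""
        for msg in self._messages:
            if msg["pop_receipt"] == pop_receipt:
                self._messages.remove(msg)
                return
        raise RuntimeError("pop receipt is no longer the most recent one")
```

A consumer that takes too long sees exactly the failure mode described: the message is redelivered with a fresh pop receipt, and the stale receipt raises an error on delete.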

Azure queue can contain an unlimited number of messages, each of which can be up to 8 KB in size. Messages are generally added to the end of the queue and retrieved from the front of the queue, although first in, first out (FIFO) behavior is not guaranteed.

Windows Azure Queue provides a reliable message delivery mechanism. It offers a simple, asynchronous work dispatch mechanism that can be used to connect the different components of a cloud application. Windows Azure Queues are highly available, durable and performance efficient. Their programming semantics ensure that a message is processed at least once. Furthermore, Windows Azure Queue has a REST interface, so applications can be written in any language and can access the queue at any time, from anywhere on the Internet.
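Because of that REST interface, every queue operation is just an HTTP request against a well-known address. A tiny sketch of how a client might build that address (the host pattern follows the documented <account>.queue.core.windows.net convention; "myaccount" and "orders" below are made-up names, and real requests still have to be signed with the account key):

```python
def queue_messages_url(account: str, queue: str) -> str:
    """Build the REST address for a queue's message operations.
    Host pattern: <account>.queue.core.windows.net; the caller is
    responsible for authenticating the actual HTTP request."""
    return f"https://{account}.queue.core.windows.net/{queue}/messages"
```

A POST to this address adds a message, and a GET retrieves messages, which is what lets any language on any platform talk to the queue.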

Saturday, July 9, 2011

Azure Table


Windows Azure Table storage causes a lot of confusion among developers becaz of their traditional RDBMS work experience. Most of their experience with data storage is with relational databases that have various tables, each containing a predefined set of columns, one or more of which are typically designated as identity keys.

By design, the Windows Azure Table service provides the potential to store enormous amounts of data, while enabling efficient access and persistence. A table doesn't have a specified schema. It's simply a structured container of rows/entities that can store one particular type of row, or rows with varying structures, in a single table, as shown in the attached image. Windows Azure Tables use keys that enable efficient querying and load balancing when the table service decides itz time to spread your table over multiple servers. How?

In Windows Azure Tables, the string PartitionKey and RowKey properties work together as an index for the table, so when defining them, you must consider how your data is queried. Each entity in a table must have a unique PartitionKey/RowKey combination, acting as a primary key for the row. The PartitionKey is used to fetch results from a query; itz also used to physically partition the tables, which provides load balancing and scalability. Letz watch an illustration.

Letz consider an EmployeeMaster table that has PartitionKeys corresponding to job designation types, such as Director, Manager, Developer, etc. During a DeveloperSummit, the rows in the Developer partition might be very busy (hoping to land niche developers!). The service can load balance the EmployeeMaster table by moving the Developer partition to a different server to better handle the many requests made to that partition. If you anticipate more activity on that partition than a single server can handle, you should consider creating more granular partitions such as DeveloperJunior and DeveloperSenior. This is because the unit of granularity for load balancing is the PartitionKey.
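The keying scheme above can be sketched as a toy in-memory service: entities are addressed by a unique (PartitionKey, RowKey) pair, properties are free-form (no fixed schema), and a whole partition is the unit the service could move to another server. This is purely illustrative, not the real table service API.

```python
class ToyTableService:
    """Sketch of Windows Azure Table keying: (PartitionKey, RowKey)
    acts as the primary key, and load balancing moves whole partitions."""

    def __init__(self):
        self._partitions = {}           # PartitionKey -> {RowKey -> entity}

    def insert(self, partition_key, row_key, **properties):
        rows = self._partitions.setdefault(partition_key, {})
        if row_key in rows:
            raise KeyError("duplicate PartitionKey/RowKey combination")
        rows[row_key] = dict(properties)   # schema-less: any properties

    def get(self, partition_key, row_key):
        """Point lookup by the composite key."""
        return self._partitions[partition_key][row_key]

    def partition(self, partition_key):
        """Everything the service could relocate as one load-balancing unit."""
        return self._partitions.get(partition_key, {})
```

Note how entities in different partitions can carry different properties in the same table, exactly the schema-less behavior the post describes.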

Isn't it Cool for load balancing and scalability? Cloud storage developers just deal with data, data, data.. BigData !

Sunday, July 3, 2011

Azure Blob


Binary Large Object (BLOB) storage is one of the four core storage services that are secure, scalable and easy to access. Itz the simplest way to store text or binary data with Windows Azure.

Conceptually, each storage account has access to blob storage. Each account can have 0..n containers, and each container holds the actual blobs, each of which is a raw byte array. Containers can be public or private. In a public container, the URLs of the blobs can be accessed over the internet, while in a private container, only the account holder can access those blob URLs.

Each Blob can have a set of metadata set as a NameValueCollection of strings. Key terms are explained in the below list:

Storage Account – All access to Windows Azure Storage is done through a storage account. This is the highest level of the namespace for accessing blobs. An account can have many blob containers.

Blob Container – A container provides a grouping of a set of blobs. The container name is scoped by the account.

Blob – Blobs are stored in and scoped by blob containers. Each blob can be up to 50GB. A blob has a unique string name within its container. Blobs can have metadata associated with them, which are name/value pairs, up to 8KB in size per blob. The blob metadata can be retrieved and set separately from the blob data bits.
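The account → container → blob hierarchy, with public/private containers and per-blob metadata, can be modeled in a few lines. The class and names below are invented for illustration; this is not the Azure Blob API.

```python
class ToyBlobAccount:
    """Sketch of the account -> container -> blob hierarchy described above.
    Containers are public or private; each blob is raw bytes plus
    name/value metadata. Purely illustrative, not the Azure API."""

    def __init__(self, name):
        self.name = name
        self.containers = {}

    def create_container(self, name, public=False):
        self.containers[name] = {"public": public, "blobs": {}}

    def put_blob(self, container, blob_name, data: bytes, **metadata):
        self.containers[container]["blobs"][blob_name] = {
            "data": data, "metadata": metadata}   # metadata set with the data

    def read_blob(self, container, blob_name, anonymous=False):
        c = self.containers[container]
        if anonymous and not c["public"]:
            raise PermissionError("private container: account holder only")
        return c["blobs"][blob_name]["data"]
```

An anonymous internet client can read from a public container but is rejected by a private one, mirroring the URL-access rule above.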

Sunday, June 26, 2011

Azure Storage


Windows Azure is Microsoft's cloud computing offering that serves as the development, service host, and service management environment for the Windows Azure Platform. The Platform comprises three pieces: Windows Azure, SQL Azure, and AppFabric.
  • Windows Azure: Cloud-based Operating System which provides a virtualized hosting environment, computing resources, and storage.
  • SQL Azure: Cloud-based relational database management system that includes reporting and analytics.
  • AppFabric: Service bus and access control for connecting distributed applications, including both on-premise and cloud applications

Last week, I wrote about the successful launch of my first cloud application, http://ganesansenthilvel.cloudapp.net/, in Windows Azure. Now, letz shift our gear towards Azure storage. The Storage Services provide 3 main services:
  • Blob: Best for blobs like binary files and text files, so think picture storage, rich media storage, documents, etc.
  • Queue: Reliable and persistent messaging between services, so think about this in terms of having a worker role (business tier) communicate with a web role (presentation tier).
  • Table: Structured storage, so think more along the lines of storing entities
I read an interesting article that compares the two cloud storage offerings (Google vs Microsoft) on price, speed, usability, service level agreement and developer support:
http://gladinet.blogspot.com/2010/01/windows-azure-storage-vs-google-storage.html


Sunday, June 19, 2011

FirstCloudApp in Production




So far, we talked about the basic ingredients to kick start a Windows Azure application. In the last few weeks, we discussed Service Configuration, Service Definition, runtime emulators and the deployment management portal.


Letz start moving the first cloud application into production. In the New Windows Azure Project dialog of Visual Studio 2010, we need to select ASP.NET Web Role from the list of available roles inside the Roles panel and click the arrow (>) to add an instance of this role to the solution. Before closing the dialog, rename the role as FirstCloud_WebRole. Click OK to create the cloud service solution. It generates the predefined ASP.NET solution files.


A cloud application has TWO major categories. They are (1) Compute and (2) Storage.


This application is related to the compute platform, i.e. not related to any cloud-specific storage elements. As a first application, it contains a simple compute cloud app with two UI tabs, filled with static content. The layout is shared in the attached image.


After testing the local development version using the Compute Emulator, the first cloud application is hosted in the production environment as defined in the management portal session.


My first cloud application is launched successfully in a Microsoft data centre. The production URL is hosted at http://ganesansenthilvel.cloudapp.net/ Hurray!

Friday, June 3, 2011

Azure Emulator


The Windows Azure compute emulator enables you to run, test, debug, and fine-tune your application before you deploy it as a hosted service to Windows Azure.

The Windows Azure compute emulator is a local emulator of Windows Azure, so you can build and test your application before deploying it to Windows Azure. Some differences exist between running an application in the compute emulator and running it as a hosted service in Windows Azure. The compute emulator does not fully emulate the behavior of the Windows Azure load balancer. You can also attach a debugger to a role instance running in the compute emulator, but not in Windows Azure.

The Windows Azure compute emulator requires that you run IIS 7.0 with ASP.NET, but not all of the role services of IIS 7.0 and not all of the features of Windows Server 2008 are installed by default. The services and features that are installed by default are a subset of the services and features that are installed in Windows Azure. Most basic services can run in the compute emulator; however, when creating more advanced services you may need to take additional measures to ensure that your service will behave in the same ways in the cloud as it does when running in the compute emulator.

The architecture of the emulator in cloud development is drawn in the attached snapshot.

Monday, May 30, 2011

Azure Management Portal





We built the first cloud application; itz time to host it in the Microsoft cloud center. As the first step, I logged into the Azure home page (http://www.microsoft.com/windowsazure). At the top right, therez a link called SignIn. Using my msn account (ganscloud@hotmail.com), I created an account for the Azure platform with an extra small instance. As of now, itz more than sufficient for my proof of concept (POC) model. For more information on cloud hosting costs, please refer to the site (http://www.microsoft.com/windowsazure/offers/)


The account is successfully set up in cloud space and the domain is named 'http://ganesansenthilvel.cloudapp.net/'. Itz time to host after creating the account. To do so, we need to build the cloud application in Release mode in the VS2010 IDE. It creates two files in the bin\Release\Publish folder, namely (1) HelloAzure.cspkg (the cloud app package file) and (2) ServiceConfiguration.cscfg (the cloud app configuration file).


In the cloud management portal, we have two regions to host our first cloud application: (1) Staging and (2) Production. Staging can be used as pre-production, i.e. a customer acceptance testing zone. Once the user signs off, the Staging app can easily be shifted to the Production zone by clicking the middle double-arrowed button. Staging URLs are self-generated numbers, whereas the production version is a meaningful, complete URL. In our app, itz ganesansenthilvel.cloudapp.net. Isn't it cool?


The buttons in the attached image are self-descriptive. The Upgrade button is used to import the cspkg and cscfg files into the cloud space. The Suspend button is used to stop the running cloud app. OS Settings is helpful for configuring the cloud Operating System.

Saturday, May 21, 2011

ServiceConfiguration




Last week, I spoke about the ServiceDefinition file; now itz about the ServiceConfiguration.cscfg file. Letz elaborate the XML elements of this cloud configuration file.


The root element, ServiceConfiguration, has only one attribute, namely serviceName. The serviceName attribute is a mandatory element in the ServiceConfiguration file. If you omit this attribute, Visual Studio is going to complain about your XML during compilation. The name of the service must match the name of the service in the service definition.


The ServiceConfiguration element can have only Role elements as children. Because the only role we have is an ASP.NET Web role, we see only one Role element. If our project contained another role, the ServiceConfiguration file would reflect that fact.


The Role element also has a single name attribute, which is required. The name attribute represents the name of the service and must match the name of the service defined in the WebRole element of the service definition. The Role element can have three child elements: Instances, ConfigurationSettings and Certificates. In the attached snap, the number of cloud instances is marked as 1. Certificates relate to the security methodology followed in the cloud app.


Looking at our ServiceConfiguration.cscfg file, we can see that it contains ConfigurationSettings and Instances elements.
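Putting the elements above together, a minimal ServiceConfiguration.cscfg might look like the sketch below. The role and setting names are examples (borrowed from the HelloAzure walkthrough), and the schema namespace is the one I assume for this SDK generation; your generated file may differ slightly.

```xml
<?xml version="1.0"?>
<!-- Hedged sketch of a minimal ServiceConfiguration.cscfg.
     Role/setting names are examples, not required values. -->
<ServiceConfiguration serviceName="HelloAzure"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="HelloAzure_WebRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString"
               value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```

Note how the serviceName and Role name are the two values that must line up with the service definition, as discussed above.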

Friday, May 13, 2011

ServiceDefinition.csdef




A couple of weeks back, I wrote about the Hello World program. Letz open it. Under Roles, therez an entry called HelloAzure_WebRole. On opening it, the configuration contains an Instances section with count and vmsize. After updating the details, open the ServiceDefinition.csdef file.


The service model is determined by the settings listed in the ServiceDefinition.csdef file and configured in the ServiceConfiguration.cscfg file. The definition file is packaged with the role binaries when the application is prepared for deployment. The ServiceConfiguration.cscfg file is deployed with the package and is used by Windows Azure to determine how the application should run.


By defining settings in the ServiceDefinition.csdef file, you can define the roles and resources for an application. An application that runs as a hosted service in Windows Azure implements one or more instances of the available role types. Within Windows Azure, running instances of a role are replicated across multiple computers to implement all or part of the functionality of the hosted service.


In our cloud app, the instance setup gets reflected in the csdef file, as attached.
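For reference, a minimal ServiceDefinition.csdef along the lines described above might look like this sketch. The names are examples, the element layout follows the early SDK schema I assume here, and your generated file may differ by SDK version.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hedged sketch of a minimal ServiceDefinition.csdef.
     Role, endpoint and setting names are illustrative examples. -->
<ServiceDefinition name="HelloAzure"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="HelloAzure_WebRole" vmsize="ExtraSmall">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings>
      <!-- Declared here; the actual value lives in ServiceConfiguration.cscfg -->
      <Setting name="DiagnosticsConnectionString" />
    </ConfigurationSettings>
  </WebRole>
</ServiceDefinition>
```

The definition declares the roles, endpoints and which settings exist, while the matching .cscfg supplies the runtime values, which is the split described in the paragraphs above.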

Sunday, May 8, 2011

Cloud trend in business


Cloud computing isn't a technology, or even a set of technologies. It is a newly understood paradigm for approaching application delivery. This is why every vendor you can think of has been saying for the past two years that it has cloud products. Itz not an invention, but a rediscovery of the 1970s time-sharing methodology with enrichment. We just didn't put those time shares in the wider context of the global communications network. Now we do.

We do anticipate ongoing centralization. For example, today's ERP cloud will become tomorrow's ERP-human resource management cloud, and the day after tomorrow's corporate private cloud for general company use. The biggest share of dedicated private cloud budgets today comes from hardware investment, as indicated in the graph. Nearly all organizations have hardware as part of their private cloud budget, closely followed by software products.

Organization and management become critical issues. And this need for staffing is an ongoing cost, not something you can write off after startup. Much of cloud spending is drawn—and will continue to be drawn—out of non-cloud budgets. This is a perfect example of how cloud computing functions as a framework for building corporate service strategies, rather than serving as a technology unto itself.

Get ready, folks, to get into cloud gear via Microsoft certification 70-583, Designing and Developing Windows Azure Applications.

Tuesday, May 3, 2011

Friendship


This week's blog is not about the tech cloud, but the friendship cloud. When clouds merge, itz the joy of rain. When friends merge, itz the joy of true love.

After last year's successful get-together (http://ganesansenthilvel.blogspot.com/2010/05/gettogether.html) of friends and families, professors and our principal, we decided to have a well-planned, 2-day family reunion. The highly anticipated days came on the first weekend of May 2011. We had a nice time at a beach home with lots of fun, teasing, memories, etc. Every homemaker jumped into the kitchen for true team food work. The kids had undisturbed outdoor games, swimming, etc. in a very free environment. Our friends recollected old incidents that couldn't be shared for years, in our ventilated session on Saturday night at the terrace.

No ego, no expectations, no politics, no vengeance - that is true, non-sinking friendship. When we played cricket amid old stories, itz thrilling/interesting knowledge for our kids. We didn't miss a second of the cherished moments. Awesome collaborative effort by every friend. Hats off to them!

As the outcome, all the homemakers planned to gather in a year without hubbies! The kids wished to continue the meet annually. For our friends, back to the history of college days! Net to net, itz one of the lovely, memorable moments in our life records.

The closing quote is: Friendship is like money, easier made than kept. Count your age by friends.

Tuesday, April 12, 2011

HelloWorld


After the last 3 weeks of environment-related settings, we are ready to make our hands dirty creating Azure code. Isn't it cool!!!

Since my higher secondary computer lessons in 1987 (proud to be among the earlier computer students in India), we used to learn each computer language using the famous 'Hello World' program - right from Quick Basic, Fortran, COBOL, Pascal, Lisp, C, C++, VC++, Java, C#, etc. The same old coding formula/masala!

After the proper installation of VS2010, the Azure SDK, IIS, etc., the Hello Azure application is created in a few seconds with the built-in wizard navigation. Quite simple steps to create the first Azure application with the above setup:


  1. Launch Visual Studio 2010 as administrator.

  2. Click File -> New Project, and pick from the installed Visual C# Cloud templates.

  3. Create Windows Azure Project with the solution name ‘HelloAzure’.

  4. Thatz it; just click OK to launch a wizard.

  5. From the wizard, add the ASP.NET Web Role in C# with ‘HelloAzure_WebRole’.

  6. Run it (Ctrl+F5) and get the output 'Hello Azure Application'.


Next week, I'll talk about the interesting topic of emulators in the Azure development platform. Now, letz have a deep dive into the auto-generated code (in the snapshot). Itz a simple web role (web application), and so there are 2 projects within the solution, namely HelloAzure_WebRole and HelloAzure.

The first project, HelloAzure_WebRole, is quite simple, like any other ASP.NET application; it comprises site.master, global.asax, web.config, App_Data, Default.aspx, etc. One new entity is 'WebRole.cs', which is the entry point for the Azure web role app, declared as public class WebRole : RoleEntryPoint. The HelloAzure project is the new concept. It contains three major components: a) the HelloAzure_WebRole configuration, b) ServiceDefinition.csdef and c) ServiceConfiguration.cscfg. They are the discussion points for the next blog entry. Until then, happy cloud coding!

Sunday, April 3, 2011

aspnet_regiis


After the IIS setup, the machine throws an error 0xc0000005. Haa, herez the solution.. the aspnet_regiis tool.

If you install the .NET Framework on a system that has IIS already installed, IIS is automatically configured to handle requests to ASP.NET pages and to redirect execution to the ASP.NET runtime. However, it may happen that you installed the Framework on a Windows system where IIS was not already present, and only later decided to add IIS. Registering ASP.NET on IIS is not just a matter of associating the various .aspx, .asmx, .axd, .ashx and other ASP.NET extensions to the aspnet_isapi.dll ISAPI; more has to be done to create the ASP.NET account and set it for ASP.NET requests, register the ISAPI itself, and other stuff. Doing all this manually can be a difficult operation and requires a good understanding of many details. Fortunately there is a utility, shipped with the .NET Framework but not documented, that can take care of these configuration chores for you. The utility is aspnet_regiis.exe; it is located under %WindowsDir%\Microsoft.NET\Framework\vx.y.zzzz\ and you should call it with the -i parameter: aspnet_regiis.exe -i

Registering the Framework in IIS involves steps such as script mapping: the association of the different .NET-specific extensions. The installation can run without this step (switch -ir), but it makes sense to skip it only if a previous version of ASP.NET is already registered with IIS. That is exactly what aspnet_regiis.exe -iru does. If no previous version of ASP.NET is registered with IIS, aspnet_regiis.exe -iru performs all the registrations in IIS and works like aspnet_regiis.exe -i. If a previous version of ASP.NET is already registered in IIS, aspnet_regiis.exe -iru works like aspnet_regiis.exe -ir. The -u parameter "uninstalls the version of ASP.NET that is associated with the ASP.NET IIS Registration tool from the computer. Existing script maps to this version of the ASP.NET ISAPI are automatically remapped to the most recent remaining ASP.NET ISAPI version installed."

Sunday, March 27, 2011

Web server setup


IIS 7 is the Web server used in Windows 7. IIS 7 has a completely modular setup design that enables control over the footprint of a Web server. IIS can be easily configured via Start->Control Panel->Programs ->Programs and Features ->Turn Windows features on or off.

Next important step is to understand the significance of each IIS option. It helps us to configure IIS for Azure development platform.

FTP Publishing Service: Installs File Transfer Protocol (FTP) Service and FTP management console. Provides support for upload and download of files.

Web Server Management Tools: Installs Web server management console and tools. It contains IIS Management Console, IIS Management scripts Tools and Management Service

Application Development Features: Installs support for application development such as ASP.NET, Classic ASP, CGI, and ISAPI extensions.

Common HTTP Features: Installs support for static Web server content such as HTML & image files, custom errors, and redirection.

Health & Diagnostics: Enables you to monitor and manage server, site, and application health such as HTTP Logging, Request Monitor, Tracing, etc.

Performance Features: Static Content Compression and Dynamic Content Compression before returning it to a client.

Security: Enables additional security protocols to secure servers, sites, applications, vdirs, and files such as Windows Authentication, Client Certificate Mapping Authentication, etc.

For cloud app development, I enabled Web Management Tools->IIS Management Console and all of the Application Development & Common HTTP Features. Apart from these, I enabled Request Filtering and Windows Authentication under Security, plus Static Content Compression under Performance Features. My dev box setup is attached as the snapshot.

Sunday, March 20, 2011

Local software installations


In last week's blog, I was talking about the free trial version of the extra small instance supplied by the Microsoft Azure team. To start the journey, the desktop is loaded with Windows 7 OS (Ultimate version) in 32-bit processing mode. The hardware spec contains an Intel i3 processor, 4 GB RAM and a 500 GB disk drive.

Once the desktop and OS are ready, the next question is 'where does Azure get started?' As far as Azure software application development is concerned, Microsoft supports various languages like Java, Python, PHP, .NET, etc. I know .NET a little better than the others, and so installed the Visual Studio 2010 IDE (Integrated Development Environment) targeting C# code. Visual Studio 2010 is the first software installation step.

The next major step is to install the Tools and SDK. SDK stands for Software Development Kit. To develop Azure applications, we need to install the Azure SDK supplied by the Microsoft Azure team. Itz freely downloadable at the Azure download page. The installation is a pretty simple wizard to run thro.

After the Azure SDK, we need to install the tools for Visual Studio 2010. Itz also pretty straightforward to install. Logically, thatz it. Set and ready to launch!

Logically, yes; but in reality itz a big boom. What happened? I got a few interesting errors that wouldn't allow the installation steps. Why? A few prerequisites were missing. Whatz that?

After the VS2010 installation, first enable IIS 7.0 with the ASP.NET settings. As the second step, we need to do the proper SDK installation based on the local system: either the 32-bit SDK (x86) or the 64-bit SDK (x64). The third step is to install the hotfixes for Visual Studio (KB983301) and Windows (KB981002) based on the underlying OS bits (x86 or x64). The fourth step is to set up the next generation identity framework, Windows Identity Foundation (WIF), which enables an identity and access management solution built on Active Directory (AD). The fifth step is to install the AppFabric SDK, which comprises access control and service bus.
After the above vital 5 steps, the Windows Azure tools for Visual Studio install successfully and the development environment is ready to launch!

Saturday, March 12, 2011

Web coding threat


Cloud computing is based on the Internet landscape. The threats within the Internet may be growing, but our ability to write efficient, secure web code provides a significant advantage to the end users.

The Microsoft patterns & practices guide Improving Web Application Security: Threats and Countermeasures gives you a solid foundation for designing, building, and configuring secure ASP.NET web applications. Whether you have existing applications or are building new ones, you can apply the guidance to help make sure that your web applications are hack-resilient. Itz published at http://msdn.microsoft.com/en-us/library/ff649874.aspx


It shows you how to review code built on the .NET Framework for potential security vulnerabilities. It shows you the specific review questions to ask and discusses the tools that you should use. In addition to general coding considerations, it contains review questions to help you review your applications for cross-site scripting, SQL injection and buffer overflow vulnerabilities.

Code reviews should be a regular part of your development process. Security code reviews focus on identifying insecure coding techniques and vulnerabilities that could lead to security issues. The review goal is to identify as many potential security vulnerabilities as possible before the code is deployed. The cost and effort of fixing security flaws at development time is far less than fixing them later in the product deployment cycle.

Saturday, March 5, 2011

Cloud Essentials


A few of my blog followers motivated me to shift gears from .NET Framework internals to the latest topic. Hmmm.

I had a little aspiration to make my hands dirty on the Azure platform. Coincidentally, Microsoft launched a new deal for a free Windows Azure Platform trial till June 30th, 2011. This developer-motivating deal comprises the listed benefits.
Compute: 750 hours of an Extra Small Compute Instance, 25 hours of a Small Instance
Storage: 500MB, 10k Storage transactions
Data Transfers: 500MB in / 500MB out
Relational Database: 1GB Web Edition SQL Azure database (for 90 days only)
Access Control transactions: 100k
Service Bus connections: 2

Additional drivers came thro industry news, like Azure having 31K customers. InformationWeek quotes an article that Microsoft's Azure cloud data center outside Chicago was able to execute the required steps in 10.142 seconds. It named Azure the fastest cloud service in a comparison of 13 vendors ('http://www.informationweek.com/news/cloud-computing/infrastructure/showArticle.jhtml?articleID=229300184&cid=RSSfeed_IWK_News')

With all the above drivers, letz kick start this series. Any cloud component, either hardware or software, exhibits 5 essential characteristics:
1. On Demand self service
Without interacting with the service provider, the customer should be able to add/delete the required resources in a few clicks.
2. Broad Network access
Serviced resources are accessed via the wire/internet, not as client-based usage.
3. Rapid Elasticity
Rapid scaling is facilitated to meet the customer demand within a very little time frame.
4. Resource Pooling
Shared usage under a multi-tenant policy, like a car pool.
5. Measured service
Metered charges based on the usage the customer needs.

Cloud computing provides 3 service models: (a) SaaS (Software as a Service), (b) PaaS (Platform as a Service) and (c) IaaS (Infrastructure as a Service). They all adhere to the above listed 5 essential characteristics.

On lighter side, home R&D environment is getting launched and ready for Azure (Microsoft Cloud) coding!

Saturday, February 26, 2011

CTS Spec


Types expose functionality to your applications and other types. Types are the mechanism by which code written in one programming language can talk to code written in a different programming language.

Because types are at the root of the CLR, Microsoft created a formal specification—the Common Type System (CTS)—that describes how types are defined and how they behave.

The CTS specification states that a type can contain zero or more members.
  • Field A data variable that is part of the object’s state. Fields are identified by their name and type.
  • Method A function that performs an operation on the object, often changing the object’s state. Methods have a name, a signature, and modifiers. The signature specifies the number of parameters (and their sequence), the types of the parameters, whether a value is returned by the method, and if so, the type of the value returned by the method.
  • Property To the caller, this member looks like a field. But to the type implementer, it looks like a method (or two). Properties allow an implementer to validate input parameters and object state before accessing the value and/or calculating a value only when necessary. They also allow a user of the type to have simplified syntax. Finally, properties allow you to create read-only or write-only "fields."
  • Event An event allows a notification mechanism between an object and other interested objects. For example, a button could offer an event that notifies other objects when the button is clicked.
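Since these CTS member kinds map loosely onto most modern languages, herez a rough analogy sketched in Python (the Button class and all its members are invented for illustration; they are not part of any real framework): a plain attribute plays the field, a property validates access like a CTS property, and a callback list stands in for an event.

```python
class Button:
    """Rough analogy of the four CTS member kinds: a field (state),
    a method (operation), a property (validated, field-like access),
    and an event (notification to interested subscribers)."""

    def __init__(self, caption):
        self._caption = caption          # field: data that is part of state
        self._click_handlers = []        # event: interested callbacks

    @property
    def caption(self):                   # property: looks like a field...
        return self._caption

    @caption.setter
    def caption(self, value):            # ...but can validate its input
        if not value:
            raise ValueError("caption must not be empty")
        self._caption = value

    def subscribe_click(self, handler):
        """Register an interested object for the click event."""
        self._click_handlers.append(handler)

    def click(self):                     # method: operates on the object
        for handler in self._click_handlers:
            handler(self)                # event raised: notify subscribers
```

The click example mirrors the one in the text: a button offers an event that notifies other objects when it is clicked.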

Sunday, February 20, 2011

Publisher Policy


In general, the publisher of an assembly simply sent a new version of the assembly to the administrator, who installed the assembly and manually edited the application’s or machine’s XML configuration files. In general, when a publisher fixes a bug in an assembly, the publisher would like an easy way to package and distribute the new assembly to all of the users. But the publisher also needs a way to tell each user’s CLR to use the new assembly version instead of the old assembly version. Sure, each user could modify his or her application’s or machine’s XML configuration file, but this is terribly inconvenient and error prone. What the publisher needs is a way to create policy information that is installed on the user’s computer when the new assembly is installed.

A publisher policy assembly is a way for a publisher to make a statement about the compatibility of different versions of an assembly. If a new version of an assembly isn't intended to be compatible with an earlier version, the publisher shouldn't create a publisher policy assembly. In general, use a publisher policy assembly when you build a new version of your assembly that fixes a bug. You should test the new version of the assembly for backward compatibility. On the other hand, if you’re adding new features to your assembly, you should consider the assembly to have no relationship to a previous version, and you shouldn't ship a publisher policy assembly. In addition, there’s no need to do any backward compatibility testing with such an assembly.

Sunday, February 13, 2011

.NET Culture


Like version numbers, assemblies also have a culture as part of their identity. Cultures are identified via a string that contains a primary and a secondary tag. As samples: for Swiss German, de is the primary tag and CH is the secondary tag (de-CH); for British English itz en-GB.

In general, if you create an assembly that contains code, you don’t assign a culture to it. This is because code doesn’t usually have any culture-specific assumptions built into it. An assembly that isn’t assigned a culture is referred to as being culture neutral.

Now you can create one or more separate assemblies that contain only culture-specific resources—no code at all. Assemblies that are marked with a culture are called satellite assemblies. For these satellite assemblies, assign a culture that accurately reflects the culture of the resources placed in the assembly. You should create one satellite assembly for each culture you intend to support.

You’ll usually use the AL.exe tool to build a satellite assembly. You won’t use a compiler because the satellite assembly should have no code contained within it. When using AL.exe, you specify the desired culture by using the /c[ulture]:text switch, where text is a string such as “en-US,” representing U.S. English. When you deploy a satellite assembly, you should place it in a subdirectory whose name matches the culture text.
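As a sketch (the file and assembly names are hypothetical), a U.S. English satellite assembly could be built and deployed like this:

```
REM Build a code-free satellite assembly from a compiled .resources file
AL.exe /culture:en-US /out:MyApp.resources.dll /embed:Strings.en-US.resources

REM Deploy each satellite assembly in a subdirectory named after its culture:
REM   MyApp.exe
REM   en-US\MyApp.resources.dll
REM   de-CH\MyApp.resources.dll
```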

If you prefer, you can specify the culture by using the System.Reflection.AssemblyCultureAttribute custom attribute instead of AL.exe’s /culture switch, as follows:
[assembly:AssemblyCulture("en-US")]

Sunday, February 6, 2011

Salute to Master


CSIR (Council of Scientific and Industrial Research) honoured me by inviting me to share the recent technology trends in the IT industry. Itz a 2-day South India conference. On Thursday (3rd Feb), I shared the latest technology in the Microsoft platform.

Though I used to take sessions for the college student community frequently, this is the first time in my life to run an official conference. To be frank, I was a bit nervous during the inaugural function. They had 2 industry speakers per day. During the inaugural process, I was one of the celebrities for the lamp lighting (kuthu vilaku) event. After the official introduction, the huge conference stage was all mine. My agenda was an overview at 11-12:30, a deep dive at 3-4 and a hands-on demo at 5-5:30. The other speaker covered the slots in between.

Having attended some interactive, impressive, inspiring sessions in my career, I thought of following the same methodology. Took the pocket collar mike, walked into the crowd, asked a few interactive questions, shared rewards (jst chocs) with them. The sessions were received quite well. My prof Ms TamilSelvi was part of the crowd and enjoyed seeing the transformation of her "product" over the last 20 years.

The committee presented a wonderful memento, with overwhelming positive feedback and a standing ovation from the audience. When they shared the vote of thanks at EOD (End Of the Day), I requested a few mins for myself. The exact script was as below:

  • Itz immense pleasure for me to get this opportunity. Thanx to everyone in this conf.
  • Exactly 2 decades ago, the same person didn't know how to speak continuously in English, how to draft a presentation, or how to connect with an audience. Today's credit goes to all my mentors, especially my first one, Prof TamilSelvi
  • Thanx everyone!

On seeing my prof in the 3rd row, she was full of tears (of joy). Jst before leaving the conf, I told her that we used to feel proud when our IT products are appreciated by end customers. And today itz your turn, Mam. One of the glorious moments in my life. Salute to true masters!

PS: In Feb 1991 (2nd year of UG), I was selected for my first tech presentation (RISC/CISC) in TCE, Madurai. I felt worried and reluctant. My prof coached/groomed me for 2 weeks to host me on the stage. Now, itz Feb 2011.

Sunday, January 30, 2011

Runtime resolves Type References


public sealed class Program {
    public static void Main() {
        System.Console.WriteLine("Hi");
    }
}

The above code is compiled and built into an assembly, say Program.exe. When you run this application, the CLR loads and initializes. Then the CLR reads the assembly’s CLR header, looking for the MethodDef token that identifies the application’s entry point method (Main). From the MethodDef metadata table, the offset within the file for the method’s IL code is located and JIT-compiled into native code, which includes having the code verified for type safety. The native code then starts executing. You can inspect these metadata tables yourself with ILDasm.exe.

When JIT-compiling this code, the CLR detects all references to types and members and loads their defining assemblies (if not already loaded). As you can see, the IL code above has a reference to System.Console.WriteLine. Specifically, the IL call instruction references metadata token 0A000003. This token identifies entry 3 in the MemberRef metadata table (table 0A). The CLR looks up this MemberRef entry and sees that one of its fields refers to an entry in a TypeRef table (the System.Console type). From the TypeRef entry, the CLR is directed to an AssemblyRef entry: “mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089”. At this point, the CLR knows which assembly it needs.
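Running ILDasm.exe over Program.exe shows the call site the CLR resolves; a sketch of Main’s IL (the exact metadata token can vary between builds) looks like this:

```
.method public hidebysig static void Main() cil managed
{
  .entrypoint
  .maxstack  8
  ldstr      "Hi"
  // token 0A000003: entry 3 in the MemberRef table (0A)
  call       void [mscorlib]System.Console::WriteLine(string)
  ret
}
```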

Now the CLR must locate the assembly in order to load it. When resolving a referenced type, the CLR can find the type in one of three places:
  1. Same file: Access to a type that is in the same file is determined at compile time (sometimes referred to as early bound). The type is loaded out of the file directly, and execution continues.
  2. Different file, same assembly: The runtime ensures that the file being referenced is in the FileRef table of the current assembly’s manifest. The runtime then looks in the directory where the assembly’s manifest file was loaded. The file is loaded, its hash value is checked to ensure the file’s integrity, the type’s member is found, and execution continues.
  3. Different file, different assembly: When a referenced type is in a different assembly’s file, the runtime loads the file that contains the referenced assembly’s manifest. If this file doesn’t contain the type, the appropriate file is loaded. The type’s member is found, and execution continues.

Friday, January 21, 2011

GacUtil


If an assembly is to be accessed by multiple applications, the assembly must be placed into a well-known directory, and the CLR must know to look in this directory automatically when a reference to the assembly is detected. This well-known location is called the global assembly cache (GAC), which can usually be found in the C:\Windows\Assembly directory.

The GAC directory is structured: It contains many subdirectories, and an algorithm is used to generate the names of these subdirectories. You should never manually copy assembly files into the GAC; instead, you should use tools to accomplish this task. These tools know the GAC’s internal structure and how to generate the proper subdirectory names. While developing and testing, the most common tool for installing a strongly named assembly into the GAC is GACUtil.exe.

You can invoke GACUtil.exe, specifying the /i switch to install an assembly into the GAC, and you can use GACUtil.exe’s /u switch to uninstall an assembly from the GAC. Note that you can’t ever place a weakly named assembly into the GAC. If you pass the file name of a weakly named assembly to GACUtil.exe, it displays the following error message: “Failure adding assembly to the cache: Attempt to install an assembly without a strong name.”
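As a quick sketch (the assembly name is hypothetical), the common GACUtil.exe switches look like this:

```
REM Install a strongly named assembly into the GAC
GACUtil.exe /i SomeLib.dll

REM List the GAC entries matching a name, then uninstall by assembly name
GACUtil.exe /l SomeLib
GACUtil.exe /u SomeLib
```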

Saturday, January 8, 2011

Deployment Modes


There are two kinds of deployment in the .NET Framework:

(1) Private deployment, in which assemblies are placed in the application’s base directory (or a subdirectory thereof) for the application’s sole use. Deploying assemblies privately gives a company a large degree of control over the naming, versioning, and behavior of the assembly.

(2) Global deployment, in which assemblies can be accessed by multiple applications. The assemblies that ship with the Microsoft .NET Framework are an excellent example of globally deployed assemblies, because all managed applications use types defined by Microsoft in the .NET Framework Class Library (FCL).

Key differences: Privately deployed assemblies are available only to the particular application in whose folder they are kept, and cannot be referenced outside the scope of that folder. In contrast, globally deployed assemblies are shared across the machine and accessible to all applications. To use them, you need to register the assembly, which must have a strong name, in the global assembly cache (GAC) using GACUtil.exe. The GAC can be found on all computers with the .NET Framework installed. Therez an alternative without the GAC; the solution is published at http://devcity.net/Articles/254/1/article.aspx. Based on the business need, you can choose the deployment mode.
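When assemblies are privately deployed into a subdirectory of the application’s base directory, the CLR must be told which subdirectories to probe. A minimal sketch of an application configuration file (the subdirectory names are hypothetical):

```xml
<!-- MyApp.exe.config: probe the bin and libs subdirectories
     when resolving privately deployed assemblies -->
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <probing privatePath="bin;libs" />
    </assemblyBinding>
  </runtime>
</configuration>
```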