I have more than 10 years of experience in the IT industry. I describe myself as a hard-working, committed, dynamic, and adaptable person who loves to take on challenges and find innovative ways to solve or tackle any problem. I hold a bachelor's degree from Lucknow University, where I majored in Computer Science, a postgraduate diploma (PGDCA) from DOEACC, and MCSD certification in .NET 2.0. I strongly believe that I can make a meaningful and impactful contribution to your reputed organization.
Monday, February 20, 2006
Instructions: Find out what each letter of your name means. Then connect all the meanings and it describes YOU. (It's TRUE) & (Isn't it GREAT!!)
PS: If you have double or triple letters, just count the meaning once.
For Example : RAMA
R - You are a social butterfly.
A - You can be very quiet when you have something on your mind.
M - Success comes easily to you.
----------------------------------------------------------------------------------
A = You can be very quiet when you have something on your mind.
B = You are always cautious when it comes to meeting new people.
C = You definitely have a partier side in you, don't be shy to show it.
D = You have trouble trusting people.
E = You are a very exciting person.
F = Everyone loves you.
G = You have excellent ways of viewing people.
H = You are not judgmental.
I = You are always smiling and making others smile.
J = Jealousy.
K = You like to try new things.
L = Love is something you deeply believe in.
M = Success comes easily to you.
N = You like to work, but you always want a break.
O = You are very open-minded.
P = You are very friendly and understanding.
Q = You are a hypocrite.
R = You are a social butterfly.
S = You are very broad-minded.
T = You have an attitude, a big one.
U = You feel like you have to equal up to people's standards.
V = You have a very good physique and looks.
W = You like your privacy.
X = You never let people tell you what to do.
Y = You cause a lot of trouble.
Z = You're always fighting with someone.
CHECK YOUR NAME MEANING AND YOU WILL FIND THAT THIS IS TRUE.............
What is Thread Pooling?
Thread pooling is the process of creating a collection of threads during the initialization of a multithreaded application, and then reusing those threads for new tasks as and when required, instead of creating new threads. The number of threads for the process is usually fixed depending on the amount of memory available, and the needs of the application. However, it might be possible to increase the number of available threads. Each thread in the pool is given a task and, once that task has completed, the thread returns to the pool and waits for the next assignment.
The Need for Thread Pooling
Thread pooling is essential in multithreaded applications for the following reasons:
- Thread pooling improves the response time of an application, as threads are already available in the thread pool waiting for their next assignment and do not need to be created from scratch
- Thread pooling saves the CLR the overhead of creating an entirely new thread for every short-lived task and reclaiming its resources once the task dies
- Thread pooling optimizes the thread time slices according to the processes currently running in the system
- Thread pooling enables us to start several tasks without having to set the properties for each thread
- Thread pooling enables us to pass state information as an object to the procedure arguments of the task being executed (see the sketch after this list)
- Thread pooling can be employed to fix the maximum number of threads for processing a particular request
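Here is a minimal C# sketch (assuming .NET 2.0; the task and its state object are invented for illustration) showing how several short-lived tasks can be started through the pool, each receiving its own state object:

```csharp
using System;
using System.Threading;

class ThreadPoolSketch
{
    static void Main()
    {
        // Queue several short-lived tasks; the pool reuses its worker threads
        // instead of spawning a brand new thread for each one.
        for (int i = 1; i <= 5; i++)
        {
            ThreadPool.QueueUserWorkItem(new WaitCallback(ProcessRequest), "request " + i);
        }

        // Give the pool threads a moment to finish before the process exits.
        Thread.Sleep(2000);
    }

    // Runs on a thread-pool worker thread; 'state' is the object we queued.
    static void ProcessRequest(object state)
    {
        Console.WriteLine("Processing {0} on pool thread {1}",
            state, Thread.CurrentThread.ManagedThreadId);
    }
}
```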
The Concept of Thread Pooling
One of the major problems affecting the responsiveness of a multithreaded application is the time involved in spawning threads for each task.
For example, a web server is a multithreaded application that can service several client requests simultaneously. Let's suppose that ten clients are accessing the web server at the same time:
- If the server operates a one-thread-per-client policy, it will spawn ten new threads to service these clients, which entails the overhead of first creating those threads and then of managing them throughout their lifetime. It's also possible that the machine will run out of resources at some point.
- Alternatively, if the server uses a pool of threads to satisfy those requests, then it will save the time involved in spawning threads each time a request from a client comes in. It can manage the number of threads created, and can reject client requests if it is too busy to handle them. This is exactly the concept behind thread pooling.
The .NET CLR maintains a pool of threads for servicing requests. If our application requests a new thread from the pool, the CLR will try to fetch it from the pool. If the pool is empty, it will spawn a new thread and give it to us. When our code using the thread terminates, the thread is reclaimed by .NET and returned to the pool. The number of threads in the thread pool is limited by the amount of memory available.
To recap then, the factors affecting the threading design of a multithreaded application are:
- The responsiveness of the application
- The allocation of thread management resources
- Resource sharing
- Thread synchronization
Responsiveness of the application and resource sharing are addressed by this chapter on thread pooling. The remaining factors have been covered in the previous chapters of this book.
The CLR and Threads
The CLR was designed with the aim of creating a managed code environment offering various services such as compilation, garbage collection, memory management, and, as we'll see, thread pooling to applications targeted at the .NET platform.
Indeed, there is a remarkable difference between how Win32 and the .NET Framework define the process that hosts the threads our applications use. In a traditional multithreaded Win32 application, each process is made up of a collection of threads, and each thread has its own Thread Local Storage (TLS) and call stack. On single-processor machines, the scheduler allots time slices to each thread based on thread priority; when the time slice for a particular thread is exhausted, the thread is suspended and some other thread is allowed to perform its task. In the .NET Framework, each Win32 process can be logically sub-divided into what are known as application domains, which host the threads along with their TLS and call stacks. It's worth noting that communication between application domains is handled by a .NET Framework feature called Remoting.
Having gained a basic understanding of thread pooling and the .NET process model, let's dig into how the CLR provides thread pooling functionality for .NET applications.
The Role of the CLR in Thread Pooling
The CLR forms the heart and soul of the .NET Framework, offering several services to managed applications, thread pooling being one of them. For each task queued in the thread pool (a work item), the CLR assigns a thread from the pool (a worker thread) and then releases that thread back to the pool once the task is done.
The CLR implements thread pools using the multithreaded apartment (MTA) model, employing high-performance queues and dispatchers on top of preemptive multitasking. Preemptive multitasking is a process in which CPU time is split into several time slices: in each time slice a particular thread executes while other threads wait, and once the slice is exhausted the CPU is handed to the highest-priority thread among those remaining. Client requests are queued as work items, and each item in the queue is dispatched to the first available thread in the thread pool.
Once the thread completes its assigned task, it returns to the pool and waits for the next assignment from the CLR. The thread pool can be fixed or of dynamic size. In the former case, the number of threads doesn't change during the lifetime of the pool. Normally, this type of pool is used when we are sure of the amount of resources available to our application, so that a fixed number of threads can be created at the time of pool initialization. This would be the case when we are developing solutions for an intranet or even in applications where we can tightly define the system requirements of the target platform. Dynamic pool sizes are employed when we don't know the amount of resources available, as in the case of a web server that will not know the number of client requests it will be asked to handle simultaneously.
Caveats to Thread Pooling
There is no doubt that thread pooling offers us a lot of advantages when building multithreaded applications, but there are some situations where we should avoid its use. The following list indicates the drawbacks and situations where we should avoid using thread pooling:
- The CLR assigns threads from the thread pool to tasks and releases them back to the pool once the task is completed. There is no direct way to cancel a task once it has been added to the queue.
- Thread pooling is an effective solution for situations where tasks are short-lived, as in the case of a web server satisfying client requests for a particular file. A thread pool should not be used for extensive or long-running tasks.
- Thread pooling is a technique for employing threads in a cost-efficient manner, where cost efficiency is defined in terms of quantity and startup overhead. Care should be exercised to determine the likely utilization of threads in the pool, and the size of the thread pool should be fixed accordingly.
- All the threads in the thread pool live in the multithreaded apartment. If we want to place our threads in single-threaded apartments, then a thread pool is not the way to go.
- If we need to identify an individual thread and perform operations on it, such as starting it, suspending it, or aborting it, then thread pooling is not the right approach.
- It is not possible to set priorities for tasks that employ thread pooling.
- There can be only one thread pool associated with any given application domain.
- If the task assigned to a thread in the thread pool becomes locked (blocks indefinitely), the thread is never released back to the pool for reuse. These kinds of situations can only be avoided by careful programming.
The Size of the Thread Pool
The .NET Framework provides the ThreadPool class, located in the System.Threading namespace, for using thread pools in our applications. The number of tasks that can be queued into a thread pool is limited only by the amount of memory in your machine. The number of threads that can be executing at any instant, however, is limited by the number of CPUs in your machine because, as we already know, each processor can only actively execute one thread at a time.
By default, each thread in the thread pool uses the default stack size, runs at the default priority, and lives in a multithreaded apartment. The word default seems to be used rather vaguely here; that is no accident, since each system can have its defaults configured differently. If at any time one of the threads is idle, the thread pool will induce worker threads to keep all processors busy. If all the threads in the pool are busy and work is still pending in the queue, it will spawn new threads to complete the pending work; however, the number of threads created can't exceed the maximum number specified. By default, 25 thread pool threads can be created per processor, although this number can be changed by editing the CorSetMaxThreads member defined in the mscoree.h file. When additional threads are required beyond the maximum, the requests are queued until some thread finishes its assigned task and returns to the pool. The .NET Framework itself uses thread pools for asynchronous calls, establishing socket connections, and registered wait operations.
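As a small, hedged illustration (assuming .NET 2.0), the limits described above can be read back from the ThreadPool class rather than assumed:

```csharp
using System;
using System.Threading;

class PoolSizeSketch
{
    static void Main()
    {
        int workerThreads, completionPortThreads;

        // Maximum number of pool threads the CLR will create on this machine.
        ThreadPool.GetMaxThreads(out workerThreads, out completionPortThreads);
        Console.WriteLine("Max worker threads: {0}, max I/O threads: {1}",
            workerThreads, completionPortThreads);

        // Threads currently free to take on new work items.
        ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);
        Console.WriteLine("Available worker threads: {0}, available I/O threads: {1}",
            workerThreads, completionPortThreads);
    }
}
```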
Saturday, February 18, 2006
There is more to object-oriented programming than simply encapsulating in an object some data and the procedures for manipulating those data. Object-oriented methods deal also with the classification of objects and they address the relationships between different classes of objects.
The primary facility for expressing relationships between classes of objects is derivation--new classes can be derived from existing classes. What makes derivation so useful is the notion of inheritance. Derived classes inherit the characteristics of the classes from which they are derived. In addition, inherited functionality can be overridden and additional functionality can be defined in a derived class.
A feature of this book is that virtually all the data structures are presented in the context of a single class hierarchy. In effect, the class hierarchy is a taxonomy of data structures. Different implementations of a given abstract data structure are all derived from the same abstract base class. Related base classes are in turn derived from classes that abstract and encapsulate the common features of those classes.
In addition to dealing with hierarchically related classes, experienced object-oriented designers also consider very carefully the interactions between unrelated classes. With experience, a good designer discovers the recurring patterns of interactions between objects. By learning to use these patterns, your object-oriented designs will become more flexible and reusable.
Recently, programmers have started to name the common design patterns. In addition, catalogs of the common patterns are now being compiled and published.
The following object-oriented design patterns are used throughout this text:
- Containers
- Enumerators
- Visitors
- Cursors
- Adapters
- Singletons
Containers
A container is an object that holds other objects within it. A container has a capacity, it can be full or empty, and objects can be inserted into and withdrawn from it. In addition, a searchable container is a container that supports efficient search operations.
Enumerators
An enumerator provides a means by which the objects within a container can be accessed one-at-a-time. All enumerators share a common interface, and hide the underlying implementation of the container from the user of that container.
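A minimal C# sketch of the idea (the Bag class, its fixed capacity, and its members are invented for illustration and are not part of the book's class hierarchy): the container exposes a standard enumerator so callers can walk its contents without seeing the internal array.

```csharp
using System.Collections;

// A toy container whose contents can be traversed one at a time
// without exposing how they are stored.
public class Bag : IEnumerable
{
    private object[] items = new object[10];   // fixed capacity, for brevity
    private int count;

    public void Insert(object item) { items[count++] = item; }

    // The enumerator hides the underlying array from callers.
    public IEnumerator GetEnumerator()
    {
        for (int i = 0; i < count; i++)
            yield return items[i];
    }
}

// Usage:
// Bag bag = new Bag();
// bag.Insert("first"); bag.Insert("second");
// foreach (object o in bag) System.Console.WriteLine(o);
```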
Visitors
A visitor represents an operation to be performed on all the objects within a container. All visitors share a common interface, and thereby hide the operation to be performed from the container. At the same time, visitors are defined separately from containers. Thus, a particular visitor can be used with any container.
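A hedged C# sketch (the interface and class names are invented, not taken from the book): the visitor object carries the operation, and a container applies it to every element without knowing what the operation does.

```csharp
// The operation to be performed on each object in a container.
public interface IVisitor
{
    void Visit(object obj);
}

// One concrete visitor; any container can accept it.
public class PrintingVisitor : IVisitor
{
    public void Visit(object obj) { System.Console.WriteLine(obj); }
}

// A container's Accept method applies any visitor to every object it holds.
// For example, the Bag sketched earlier could add:
//
//     public void Accept(IVisitor visitor)
//     {
//         foreach (object item in this)
//             visitor.Visit(item);
//     }
```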
Cursors
A cursor represents the position of an object in an ordered container. It provides the user with a way to specify where an operation is to be performed without having to know how that position is represented.
Adapters
An adapter converts the interface of one class into the interface expected by the user of that class. This allows a given class with an incompatible interface to be used in a situation where a different interface is expected.
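For example, a short C# sketch with invented names: the client code expects an ISimpleContainer interface, and an adapter lets the existing Queue class, whose interface is Enqueue/Dequeue, be used in its place.

```csharp
using System.Collections;

// The interface the client code expects (hypothetical, for illustration only).
public interface ISimpleContainer
{
    void Insert(object item);
    object Withdraw();
}

// The adapter converts Queue's Enqueue/Dequeue interface into Insert/Withdraw.
public class QueueAdapter : ISimpleContainer
{
    private Queue queue = new Queue();

    public void Insert(object item) { queue.Enqueue(item); }
    public object Withdraw() { return queue.Dequeue(); }
}
```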
Singletons
A singleton is a class of which there is only one instance. The class ensures that only one instance is created, and it provides a way to access that instance.
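A common C# sketch of the pattern (the Configuration class name is only an example):

```csharp
public sealed class Configuration
{
    // The single instance, created when the class is first used.
    private static readonly Configuration instance = new Configuration();

    // A private constructor prevents any other instance from being created.
    private Configuration() { }

    // The one global point of access to the instance.
    public static Configuration Instance
    {
        get { return instance; }
    }
}

// Usage:
// Configuration cfg = Configuration.Instance;
```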
Thursday, February 16, 2006
Application Configuration Files
The application configuration files (web.config and the site-wide machine.config) have had a number of changes, including new attributes added to existing elements as well as new elements to support the new features. The changed sections of the configuration files cover the topics listed below:
- Client targets
- Compilation
- Build providers
- Web proxy
- HTTP modules
- HTTP runtime
- HTTP handlers
- Globalization
- Pages
- Session state
- Web request modules
The new sections cover the following topics:
- Anonymous identification
- Code DOM
- Connection strings
- Data
- Caching
- Expression builders
- Hosting
- Image generation
- HTTP cookies
- Membership
- Site maps
- Site counters
- Personalization Profile
- Protocol bindings
- Role Manager
- Mail servers
- URL mappings
- Web Parts
- Web Site Administration Tool
- Protected data
- Health monitoring
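For instance, here is a minimal, illustrative web.config fragment that uses two of the new sections, connection strings and the role manager; the names and values are placeholders rather than recommendations:

```xml
<configuration>
  <!-- Connection strings now get their own section instead of living in appSettings -->
  <connectionStrings>
    <add name="PubsConnection"
         connectionString="server=(local);database=pubs;Integrated Security=SSPI"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
  <system.web>
    <!-- The role manager is one of the new security-related sections -->
    <roleManager enabled="true" />
  </system.web>
</configuration>
```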
Tuesday, February 14, 2006
What's Wrong with ASP.NET 1.x?
ASP.NET 2.0 addresses the areas that both the development team and users wanted to improve. The aims of the new version are listed below.
- Reduce the number of lines of code required by 70%. The declarative programming model freed developers from having to write reams of code, but there are still many scenarios where this cannot be avoided. Data access is a great example, where the same Connection, DataAdapter/DataSet, and Command/DataReader code is used regularly (a sketch of this repetitive code appears after this list).
- Increase developer productivity. This partly relates to reducing the amount of code required but is also affected by more server controls encompassing complex functionality, as well as providing better solutions for common Web site scenarios (such as portals and personalized sites).
- Use a single control set for all devices. Mobile devices are becoming more pervasive, with an increasing number of new devices. Many of the server controls render appropriately for small screens, but there are two major problems with the current support for mobile devices: (1) having a separate set of server controls purely for mobile devices is not only confusing but also costly, and (2) adding support for new devices requires additional development work and maintenance. ASP.NET 2.0 will provide a single set of controls and an extensible architecture to allow them (and other controls) to support multiple devices.
- Provide the fastest Web server platform. Although ASP.NET 1.0 offers a fast server platform, ASP.NET 2.0 will improve areas such as application start-up times and provide better application tracing and performance data. Innovative caching features will enhance application performance, especially when SQL Server is used.
- Provide the best hosting solution. With the large number of Internet applications being hosted, it's important to provide better solutions for hosters. For example, better management features to identify and stop rogue applications will give hosters more control over their current environment. More control can also be given to hosted companies by use of the new Web-based administration tool, allowing users to easily control the configuration of applications remotely.
- Provide easier and more sophisticated management features. Administration of ASP.NET applications under version 1.x required manual editing of the XML configuration file, which is not a great solution for administrators. Version 2.0 brings a graphical user interface–based administration tool that is integrated with the Internet Information Services (IIS) administration tool.
- Ease implementation of entire scenarios. The better management features are built on top of a management application programming interface (API), allowing custom administration programs to be created. Along with application packaging this will provide support for easily deployable applications, with or without source.
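As a concrete illustration of the first aim, here is a sketch of the kind of repetitive ADO.NET code an ASP.NET 1.x page typically contained (the connection string, query, and class name are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

public class ProductsPage
{
    // The same Connection / DataAdapter / DataSet boilerplate, repeated page after page.
    public DataSet LoadProducts()
    {
        SqlConnection connection = new SqlConnection(
            "server=(local);database=Northwind;Integrated Security=SSPI");
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT ProductID, ProductName FROM Products", connection);

        DataSet products = new DataSet();
        adapter.Fill(products, "Products");   // Fill opens and closes the connection for us
        return products;
    }
}
```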
Even from this broad set of aims you can see that ASP.NET 2.0 is a great advance from 1.x for both developers and administrators.