I went over this book to get some idea of STP. I was completely clueless about what happens at the back end once a trade is placed with a broker. STP was introduced in Indian markets by FT and a few other vendors almost a decade ago, and it has led to a clear improvement in operational efficiency. Even though this book talks about the .NET platform, I liked the chapter on STP that explained things using a business case and .NET code.
STP
STP Interoperability
The entire structure of the book can be summarized using this visual.
I will go over the contents in greater detail sometime in the future.
Posted at 03:11 AM in Books, Finance, Technology
Posted at 02:42 AM in Talks
The core message of this book by Seth Godin is that mass marketing is dead and it’s an era of marketing to the weird. Seth clarifies the word weird by defining it as “people who have chosen to avoid conforming to the masses, at least in some parts of their lives”. In one sense the book is a rehash of Seth’s other books, but there are some examples and illustrations that might motivate a reader to go through this 90-odd-page book. The book starts off with an example of a zoo in Belgium. The zoo, facing dwindling crowds and hence falling revenues, pulls off a mass marketing campaign by focusing on “a pregnant elephant”. This sort of marketing brings the crowds back to the zoo, but such examples are the exception rather than the rule, says Seth. He describes a few situations that capture the changing marketplace in both the online and offline worlds. Here is a sample:
I’m standing at the corner of First Avenue and 8th Street in Manhattan. Is there a more vibrant commercial corner in New York? Look, there’s the Islamic cultural center across the street, right next to the Atomic Wings buffalo chicken wings joint. Around the corner is Veniero’s, a fabled Italian bakery. I can see a group of twenty Chinese tourists out front, just hanging out. Behind me is Momofuku, a modern take on the traditional Japanese noodle bar, run by a Korean chef who grew up in Virginia. Crossing the street is a rich uptown lawyer walking hand in hand with a tattooed downtown kid (or maybe it’s a tattooed lawyer and a well-dressed tourist). The sidewalk is so crowded you have to stand in the street if you want to stand still. I haven’t even mentioned the place selling hydroponic herbs, the guy who sells fresh bulbs of turmeric, or the Nuyorican Poets Café, where poets, famous and unknown, come to jam while the owner heckles from the back. There are gluten-free bakeries, extra-gluten bakeries, vegan bakeries, and probably a few people selling hash brownies as well. There’s nothing going on around me that would be considered normal by a time-traveling visitor from 1965 (except maybe the brownies). Instead, there are collisions of ideas and cultures.
Seth offers four reasons for the marketplace getting weirder:
The gradual shift in behavior is illustrated by a caption, “Bell Curve is spreading”:
1950s-1960s: Distribution of behaviors was tightly grouped
1970s-1980s: Distribution of behaviors was spreading
BELL CURVE IS GONE!
By 2010, the distribution of behaviors had spread to the point where there was more weird outside the box than normal inside it.
Now, as the curve spreads, the geeks are even geekier. They don’t want the new thing; they want the beta version. They don’t want the thing that’s new; they want the thing that’s so new that the other geeks don’t even know about it. And a geek with an eye for denim might be specializing—denim isn’t enough, it has to be selvedge denim, or even better, selvedge denim from Japan.
Seth warns people who cater to the mass by saying:
We have all become weird; our behavior, tastes, preferences and hobbies are becoming weirder. Be it in business or in creating art, those who adopt a strategy of pursuing the weird have consistently outperformed those who seek to cater to the mass.
Posted at 12:47 AM in Books, Marketing Trends
The author of this book, Thomas M. Sterner, is a piano technician for a major performing arts center. He admits that, in his career spanning 25 years, he has faced a lot of challenges in keeping himself disciplined and focused, considering that the act of tuning and maintaining a piano, an instrument with 88 keys, is repetitious and tedious by nature. Over a period of time, the author developed a mindset that has kept him productive. Through this book, Sterner shares his method of remaining disciplined.
Process not the Product
In any form of practice, it is important to focus on the process and not on the product. There are umpteen variations of this statement that one gets to hear. The author acknowledges this fact but goes on to add his own flavor to the statement. He uses music as an example to show that focusing on the process takes care of the product, but not vice versa. He says
Keep yourself process-oriented. Stay in the present. Make the process the goal and use the overall goal as a rudder to steer your efforts. Be deliberate, have an intention about what you want to accomplish, and be aware of that intention. Doing these things will eliminate the judgments and emotions that come from a product-oriented or results-oriented mind.
It’s How You Look At It
The author gives a beautiful analogy of a flower to shift one’s perspective towards everything in life. He says
Ask yourself: at what point in a flower's life, from seed to full bloom, has it reached perfection? Let's look at this right now and see what nature is teaching us every day as we walk past the flowers in our garden. At what point is a flower perfect? Is it when it is nothing more than a seed in your hand waiting to be planted? All that it will ever be is there in that moment. Is it when it first starts to germinate unseen under several inches of the soil? This is when it displays the first visible signs of the miracle we call creation. How about when it first pokes its head through the surface and sees the face of the sun for the first time? All of its energies have gone into reaching for this source of life; until this point, it has had nothing more than an inner voice telling it which way to grow to find it. What about when it begins to flower? This is when its own individual properties start to be seen. The shape of the leaves, the number of blooms are all unique to just this one flower, even among the other flowers of the same species. Or is it the stage of full bloom, the crescendo of all of the energy and effort it took to reach this point in its life? Let’s not forget that humble and quiet ending when it returns to the soil from where it came. At what point is the flower perfect? I hope you already know that the answer is that it is always perfect.
Using this example, the author suggests that by following a present-minded approach, one can experience tremendous relief from the fictitious, self-imposed pressures and expectations that only slow one’s progress.
Perception Changes Create Patience
Patience is probably at the top of everyone’s list of most sought-after virtues. One of the reasons that we become impatient is that we step out of the NOW. There is a saying that states that most of what we worry about never comes to pass. Thinking about a situation before you are in it only scatters your energy. The first step toward patience is to become aware of when your internal dialog is running wild and dragging you with it. If you are not aware of this when it is happening, which is probably most of the time, you are not in control. The second part is understanding and accepting that there is no such thing as reaching a point of perfection in anything. True perfection is both always evolving and at the same time always present within you, just like the flower.
Progress is a natural result of staying focused on the process of doing anything. When you stay on purpose, focused in the present moment, the goal comes to you with frictionless ease. However, when you constantly focus on the goal you are aiming for, you push it away instead of pulling it toward you. In every moment of your struggle, by looking at the goal and constantly referencing your position to it, you are affirming to yourself that you haven’t reached it. You only need to acknowledge the goal to yourself occasionally, using it as a rudder to keep you moving in the right direction.
Suppose you are trying to learn how to play a piece of music and you come from this new perspective. Your experience will be totally different than what we usually think of in terms of learning to play a musical instrument. In the old way, you are sure that you are not going to be happy or “successful” until you can play the piece of music flawlessly. Every wrong note you hit, every moment you spend struggling with the piece, is an affirmation that you have not reached your goal. If, however, your goal is learning to play the piece of music, then the feeling of struggle dissolves away. With each moment you spend putting effort into learning the piece, you are achieving your goal. An incorrect note is just part of learning how to play the correct note; it is not a judgment of your playing ability. In each moment you spend with the instrument, you are learning information and gaining energy that will work for you in other pieces of music. Your comprehension of music and the experience of learning it are expanding. All of this is happening with no sense of frustration or impatience. What more could you ask for from just a shift in perspective?
Four S Words: Simplify, Small, Short and Slow
The author shows the interconnectedness between these words and the way these four words can be used to structure any work, be it following a fitness regimen, coding an algorithm, playing an instrument, or cleaning up the house. Any task that is overwhelming at first sight might put us off, and sometimes we tend to shelve it permanently. But once we simplify the goal, divide this simplified goal into small sections, and do these sections in short durations at a slow pace, paradoxically, we get far more done, and more efficiently. This might all sound very obvious, but then it is relevant to ask oneself: “When was the last time you did all four things: simplified a project, took a small section, worked on it for a short time and, more importantly, worked on it slowly?”

More than any other place, I can relate this to music. You can’t master a raaga without the four components mentioned. You have to simplify the goal of a typical 1.5-hour rendition of a raaga, divide it into small sections, practice one of the small sections in a short interval, let’s say 45 minutes, and, more importantly, practice it slowly. My Sitar guru was mentioning the other day Budhaditya Mukherjee, a Sitar maestro. In an interview, when asked about his practice regimen, he replied that he practiced 3 hours in the morning and 3 hours in the evening and, more importantly, that he does it EVERY SINGLE DAY. Unlike the Ravi Shankars of the world who claim to practice 15-20 hours a day, which seems practically impossible for a human being, Budhaditya Mukherjee’s ritual sounds more pragmatic and realistic. In the same interview, he also mentions that the combination of Simplify + Small + Short + Slow is quintessential to mastering a raaga.
Equanimity and DOC
Calmness and even temper are the words that go along with equanimity. These are the characteristics that are desirable when we are working on something. However, there is an evil beast called “judgement” that sometimes jumps upon us. Judgement is inevitable in our lives, but it becomes pathological when we overdo it, and most of the time we overdo it. We judge everything in life, most of it unconsciously. We imagine hypothetical scenarios and think about the possible outcomes and the possible judgements that we or others would make in such scenarios. It’s like running simulations to create parallel worlds and checking the parameter values. That is fine in statistics, where alternate worlds give confidence intervals on parameters, but it is detrimental when we are working on something, as it robs us of the NOW.

It happens to all of us. We are doing something, be it running, reading, programming, or playing an instrument. Instead of just being in the present, we start judging it. We think of the next activity that needs to be done, replay some odd conversation with someone, imagine hypothetical situations. We unconsciously engage ourselves in things that have nothing to do with what we are CURRENTLY doing. Judgement gives rise to emotions, and they stem from a sense that “this is right and that is wrong” or “this is good and that is bad.” Right and good make us happy, while bad and wrong make us upset or sad. We feel that right and good are at least approaching the “ideal,” while wrong and bad are moving away from it. We all want to be happy and have an ideal life, but what constitutes right and wrong is neither universal nor constant. The evaluations and judgements we make unconsciously every second of our lives jump-start our emotions and bring us so much anxiety and stress. What can be done about it? Work done without the judgement part is far more effective, as the execution of the task becomes that much more clinical.
The author suggests meditation as a means to get out of this ever-judgemental mode of mind. He also offers an adjunct method, which he calls “DOC: Do + Observe + Correct”. He gives anecdotes and experiences from his work as a piano technician that bring out the importance of DOC in performing tasks efficiently. By using DOC repetitively for most of the tasks in our daily lives, it is possible to remove the judgement that clouds our thinking and execution. The C in DOC refers to evaluation, not judgement: evaluation comes before the act of passing judgement. After evaluating, you skip the judgement part and go over the DOC cycle again, as judgement has no value in the DOC mindset.
Teach and Learn from Children
Time perception is an integral part of the difference between adults and children. In general, children don’t seem to have a sense of where they are going in life. There is today and that’s it. They live in the present moment, though not really by their own choice; it’s just how they are. So making them do any activity that takes time and effort to master, like learning to play an instrument, is difficult, as they don’t see a point in doing it. There is no instant gratification in learning math or learning an instrument; it is usually an activity that involves a lot of struggle and failure. So there is a paradox here. What’s frustrating as an adult, with regard to teaching them to stay in the present when they are engaged in something that requires perseverance, is that they can’t see the point. Why work at something that requires a long-term commitment, a perception of time outside the present moment? Children are always in the NOW, and adults find it difficult to be in the NOW. Is there something that parents and children can learn from and teach each other? The author shares his experiences in dealing with his two daughters in this context.
A Piano technician’s job from the outside looks repetitive. This book provides a different perspective of the work and in doing so, provides valuable guidance for deliberate practice.
Posted at 12:41 AM in Books, Philosophy, Reflections
Via Reuters:
Swiss bank UBS on Sunday increased the amount it said it had lost on rogue equity trades to $2.3 billion and alleged a trader concealed his risky deals by creating fictitious hedging positions in internal systems. UBS stunned markets on Thursday when it announced unauthorised trades had lost it some $2 billion. London trader Kweku Adoboli was charged on Friday with fraud and false accounting dating back to 2008.
"The loss resulted from unauthorised speculative trading in various S&P 500, DAX, and EuroStoxx index futures over the last three months," UBS said in a brief statement."The loss arising from this matter is $2.3 billion. As previously stated, no client positions were affected."
No client positions affected! Complete B.S.
Posted at 01:35 AM in Finance
This book by J. R. Norris is a classic reference for Markov chains. It is unlikely that any recent paper on Markov chains fails to mention this book somewhere. I will attempt to summarize the main chapters of the book in this post.
The book starts off by saying that Markov processes are random processes that don’t retain memory. When the state space is restricted to finitely or countably many states, these processes are called Markov chains. The introduction gives the reader great clarity by listing the kinds of chains that will be dealt with in the book. The following gives a basic flavor of the Markov chains dealt with in the introductory chapters:
Fig 1  Fig 2  Fig 3

Fig 1 is a finite Markov chain with three states 1, 2, 3. Since the movement of the random variable is restricted to discrete states, the chain is termed a discrete chain. In Fig 2, the random variable moves from 0 to 1 after waiting for a time T with an exponential density. In Fig 3, the random variable moves from 0 to 1, then from 1 to 2 and so on, with the waiting time between jumps exponentially distributed with rate lambda, so that the jump count forms a Poisson process. The latter two are continuous-time Markov chains.
A very useful diagram which summarizes a lot of terminology is the following Markov chain.
The features of this Markov chain are the following:
All these terms become intuitively clear in the context of the above Markov chain. Clear definitions of these terms are presented in subsequent chapters. A few questions are posed for the above Markov chain so as to motivate the reader to think about ways to solve various kinds of problems that arise in this context of a Discrete Markov chains.
Chapter 1: Discrete-Time Markov Chains
This chapter forms one of the core chapters of the book and the important theorems on the following are stated and proved:
First, the necessary terms are defined: state space, initial distribution, transition matrix, stochastic matrix. Using these objects, the Markov chain is defined; its symbolic representation essentially captures the memoryless property. The chapter then introduces methods to calculate the probability of being in a particular state after n transitions. One obvious but naive method is to multiply the transition matrix by itself n times and then combine the initial distribution with this powered matrix. There is a better method, and this section explains it using linear algebra, where powers of the matrix are easily calculated via diagonalization. Basically, one finds the eigenvalues of the transition matrix, writes the power of the matrix in terms of the eigenvalues, and then calculates the desired probability. This method is computationally efficient. Once the eigenvectors are found, any element of the n-step transition matrix can be written as a linear combination of nth powers of the eigenvalues.
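As a minimal numpy sketch of this diagonalization trick, here is a comparison of the brute-force matrix power against the eigendecomposition route. The two-state transition matrix is a made-up illustration, not an example from the book.

```python
import numpy as np

# Hypothetical two-state chain (not from the book); each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def n_step_naive(P, n):
    """Brute force: multiply the transition matrix by itself n times."""
    return np.linalg.matrix_power(P, n)

def n_step_eigen(P, n):
    """Diagonalize P = U diag(lam) U^-1, so that P^n = U diag(lam^n) U^-1.
    (Assumes P is diagonalizable, which holds here.)"""
    lam, U = np.linalg.eig(P)
    return (U @ np.diag(lam ** n) @ np.linalg.inv(U)).real

n = 10
assert np.allclose(n_step_naive(P, n), n_step_eigen(P, n))
```

Each entry of `n_step_eigen(P, n)` is exactly a linear combination of the nth powers of the eigenvalues, which is why raising only the diagonal is enough.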
The chapter introduces the “class structure”, a way to identify different states and categorize them. Two states are said to “communicate” when there is positive probability of reaching each from the other. A set of states in which each state communicates with every other forms a communicating class. “Communicates” is actually an equivalence relation, and set theory tells us that we can partition the state space based on it. A class is called “closed” if there is no escape from it. If there is only a single class in the chain, the chain is called “irreducible”. If a closed class consists of a single state, that state is called an “absorbing state”: once the chain enters it, it remains there forever.
Hitting times and absorption probabilities are introduced subsequently. The first time I came across hitting times was in Marek Capinski’s book and subsequently in Shreve’s book. I could not appreciate those concepts at that time as much as I should have. Hitting times were defined and various distributions were computed, some in closed form, some using numerical analysis, but I did not have the faintest clue about their real-world application, except in the American options case. Years later, when I was developing a pairs model, things became clear: modeling the hitting times for the spread and estimating the parameters of the model made me realize their practical importance. This book has given me a lens to view hitting times, i.e., via Markov chains. The key takeaway from this section is that the vector of hitting probabilities (and of mean hitting times) is found by solving a set of linear equations, and the right answer is the minimal non-negative solution. The minimality condition plays an important role in pinning down the solution to this system. This is shown using the gambler’s ruin example and a simple birth-death chain.
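To make the linear-equations idea concrete, here is a sketch of the gambler’s ruin hitting-probability system solved with numpy. The stake N and win probability p are illustrative choices; on the interior states the standard recurrence h_i = p*h_{i+1} + q*h_{i-1} applies, with the boundary conditions doing the rest.

```python
import numpy as np

# Gambler's ruin on states 0..N; h_i = P(hit 0 | start at i).
# Boundary: h_0 = 1, h_N = 0.  Interior: h_i = p*h_{i+1} + q*h_{i-1}.
N, p = 10, 0.5
q = 1 - p

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0] = 1.0; b[0] = 1.0            # ruin is certain starting at 0
A[N, N] = 1.0; b[N] = 0.0            # already "won" at N
for i in range(1, N):
    A[i, i] = 1.0
    A[i, i + 1] = -p
    A[i, i - 1] = -q

h = np.linalg.solve(A, b)
# For the fair game (p = 1/2) the known closed form is h_i = 1 - i/N.
assert np.allclose(h, 1 - np.arange(N + 1) / N)
```

On a finite state space the system has a unique solution, so solving it directly is fine; the "minimal non-negative" caveat matters on infinite state spaces, where the linear system alone does not pin the answer down.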
The strong Markov property is introduced subsequently. To summarize in plain English, the intention behind the word “strong” is as follows. The Markov property is memorylessness conditional on the chain taking a particular value at a fixed time m. A natural generalization to ask for is: does the memoryless property also hold at random times, instead of the fixed time in the definition? The answer is yes, but only for a specific class of random times called stopping times. A “stopping time” is a random time whose value depends only on the history of the path and not on the future of the chain. In this context, the first passage time and the first hitting time are defined and used to prove the strong Markov property theorem, which states that the Markov property is preserved at stopping times. OK, so what if the Markov property holds? Well, this property gives us the flexibility to compute the entire probability generating function of the hitting times. If the chain is in state i, the probability of hitting 0, let’s say, can be computed via a recurrence relation together with the minimal non-negative solution condition. However, if one wants the entire set of probabilities at once, the generating function approach works like a charm. The strong Markov property is used to derive closed forms for the probability generating functions in gambler’s ruin and other examples. The reader can at once see the importance of knowing the generating function, as it is basically the signature of the random variable: you can calculate hitting probabilities starting from any state, and you can compute the mean hitting time by simply differentiating the generating function.
So the main takeaway from this section is that the strong Markov property helps in computing more general mathematical objects like generating functions, which provide much more information about the hitting time probabilities.
A state in the Markov chain’s state space is transient or recurrent depending on whether the probability of returning to it is strictly less than 1 or equal to 1. The section on transient and recurrent behavior is very important, as it provides definitions for these terms and links them to the class structure. Some of the main theorems mentioned in this section are:
The first passage time decomposition, given as an exercise, is very useful for getting another perspective on characterizing a recurrent state via the probability of return at the nth step. The chapter then moves on to the following conditions for state classification:
The above conditions are applied to random walk in 1D, 2D and 3D space so that a reader can get an idea to use the same for other situations.
The chapter then shifts to explaining invariant distributions. An invariant distribution of a transition matrix is a distribution that stays put: if you start with some initial distribution over states, then after many steps the Markov chain moves across states in such a way that the distribution of the values it takes mirrors the invariant distribution. In other words, the probability of finding the chain in a given state settles to a constant as the number of steps increases. The calculation of an invariant distribution becomes easy with linear algebra: compute the eigenvectors of the transpose of the transition matrix, pick the one corresponding to eigenvalue 1, and normalize it; that vector is the invariant distribution of the Markov chain. The chapter then introduces the expected return time between recurrent visits to a state. This expectation is very useful in classifying states: if it is finite, the state is termed “positive recurrent”; if infinite, “null recurrent”. Thus states can be classified as transient, positive recurrent, or null recurrent. In an irreducible chain all states share the same classification, and an invariant distribution exists exactly when the states are positive recurrent. Computation of an invariant measure also leads to easy calculation of expected return times.
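The eigenvector recipe just described can be sketched directly; the two-state matrix below is a made-up illustration, not from the book.

```python
import numpy as np

# Hypothetical two-state chain (not from the book).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The invariant distribution is the eigenvector of P-transpose for
# eigenvalue 1, normalized to sum to 1.
lam, V = np.linalg.eig(P.T)
k = np.argmin(np.abs(lam - 1.0))
pi = V[:, k].real
pi = pi / pi.sum()

assert np.allclose(pi @ P, pi)          # invariance: pi P = pi
assert np.allclose(pi, [5 / 6, 1 / 6])  # hand-checked for this P
```

Solving pi P = pi by hand for this matrix gives pi = (5/6, 1/6), which is the sanity check in the last line.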
Another related concept discussed is periodicity. In an irreducible chain, either every state is periodic or every state is aperiodic. Basically, a state is aperiodic if the chain can return to it at all sufficiently large numbers of steps, and periodic if the chain can only revisit it in a fixed pattern of step counts. The relevance of periodicity is that convergence to the invariant distribution holds for an irreducible, aperiodic (positive recurrent) chain. An extremely important aspect relevant to invariant distributions is also discussed: time reversibility. A sufficient condition for a distribution to be invariant is the “detailed balance” condition, which is defined and applied to a few examples. The chapter ends with a proof of the ergodic theorem, the pivotal achievement of A. A. Markov.
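The detailed balance condition is easy to check numerically: pi_i * p_ij must equal pi_j * p_ji for every pair of states. The three-state reflecting random walk below is a made-up example for illustration.

```python
import numpy as np

# Hypothetical random walk on {0, 1, 2} with reflecting ends (not from
# the book).  The matrix happens to be symmetric.
P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
pi = np.array([1 / 3, 1 / 3, 1 / 3])   # uniform guess for this symmetric chain

# Detailed balance: the "flow" matrix pi_i * p_ij must equal its transpose.
flow = pi[:, None] * P
balanced = np.allclose(flow, flow.T)

assert balanced
assert np.allclose(pi @ P, pi)          # detailed balance implies invariance
```

The second assertion illustrates why detailed balance is a useful shortcut: once the pairwise flows balance, invariance of pi follows without solving any linear system.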
In the real world, no one hands us the transition matrix; it has to be estimated in some manner. Given a realization of a Markov chain and a set of states, the chapter shows the approach typically used in most situations, the MLE method: one writes down the likelihood function and uses the Lagrangian method to compute the transition probabilities. The problem with this approach is that the transition probability estimates are not consistent for transient states, so the approach is modified to make the estimates consistent.
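For a finite chain, the MLE described above reduces to a very simple recipe: count observed i-to-j transitions and normalize each row. A minimal sketch, with a short made-up path for illustration:

```python
import numpy as np

def estimate_transition_matrix(path, n_states):
    """MLE of P from one observed realization:
    p_ij = n_ij / sum_k n_ik, where n_ij counts observed i -> j moves."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(path[:-1], path[1:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0   # states never left: leave row at zero
    return counts / row_sums

# Toy observed realization of a two-state chain (illustrative only).
path = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
P_hat = estimate_transition_matrix(path, 2)
assert np.allclose(P_hat.sum(axis=1), 1.0)
```

This is the unmodified estimator; the book's point is that for transient states, which are visited only finitely often, these row estimates need not converge, which is what motivates the modification.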
Chapter 2: Continuous-Time Markov Chains - I
Continuous-time Markov chains are more involved, and hence the author splits the concepts into two chapters: this one, which equips the reader with the machinery needed to understand continuous-time processes, and the next, which uses this machinery to discuss continuous-time Markov chains.
The first thing the chapter touches upon is the infinitesimal generator or the Q matrix. This generator is defined and various properties are explored. Once this is done, the crucial link between transition matrix and Q matrix is provided using the following theorem:
This connection between the P and Q matrices gives the flexibility of creating a Markov process at arbitrarily fine grid points. A few examples illustrate the computation of the transition matrix given the Q matrix. The chapter then gives a first introduction to continuous-time random processes. A very important result from measure theory says that right-continuous processes are determined by their finite-dimensional distributions. Such right-continuous processes are chosen so that their probabilities can be summarized with the help of jump times and holding times. The most important thing to notice is this decomposition of a stochastic process into jump times and holding times: if you look carefully at the basic stochastic processes, the assumptions made about holding times and jump times are the basis for the realization of the process. For example, in a Poisson process the holding times are i.i.d. exponentials, i.e., the holding time before a jump is independent of the holding time after the jump. The chapter then introduces the exponential distribution and shows its connection with the memoryless property. An important theorem stated in this section is: a sum of independent exponential random variables is either certain to be finite or certain to be infinite, according to whether the sum of the reciprocals of the rates converges or diverges. Another term introduced in this section is the “first explosion time”, defined as the supremum of all jump times. A process that takes the value infinity for all times beyond the first explosion time is called a “minimal” process; “minimal” does not refer to the state of the process but to the interval of time over which the process is active.
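The P-from-Q connection is the matrix exponential P(t) = exp(tQ), which can be sketched in numpy by diagonalizing Q. The two-state generator below is a made-up illustration (off-diagonal entries are jump rates, rows sum to zero); a production code would use scipy's `expm` instead, since not every Q is diagonalizable.

```python
import numpy as np

# Hypothetical 2-state Q-matrix (not from the book): off-diagonal entries
# are jump rates, and each row sums to zero.
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

def transition_matrix(Q, t):
    """P(t) = exp(tQ), computed via diagonalization of Q.
    (Assumes Q is diagonalizable, which holds for this example.)"""
    lam, U = np.linalg.eig(Q)
    return (U @ np.diag(np.exp(t * lam)) @ np.linalg.inv(U)).real

P_half = transition_matrix(Q, 0.5)
P_one = transition_matrix(Q, 1.0)
assert np.allclose(P_one.sum(axis=1), 1.0)   # P(t) is a stochastic matrix
assert np.allclose(P_half @ P_half, P_one)   # semigroup: P(s)P(t) = P(s+t)
```

The semigroup check in the last line is exactly the "fine grid" flexibility mentioned above: halving the time step and composing gives the same answer as one big step.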
The section moves on to defining Poisson processes based on holding times and jump times. A simple way to construct a Poisson process is to simulate a sequence of exponential random variables with rate lambda and define the jump chain so that it jumps by 1 unit after every holding time. Three equivalent conditions are stated that identify any Poisson process, and the proof involves the forward equations. Subsequently a few properties of Poisson chains are explored. A counting process is characterized by the distribution of its increments; the increments might be independent, or they might be stationary, i.e., depending only on the length of the interval. So the first question one must ask when applying a counting process to a problem is: does the context have independent increments or stationary increments?
The section then introduces birth processes, which can be thought of as a generalization of the Poisson process in which the holding times are still exponentially distributed but with state-dependent rates. As in the Poisson case, the section proceeds by building the Q matrix, formulating the forward equation for computing transition probabilities from the Q matrix, and finally defining the birth process in three equivalent ways.
The most interesting part of this chapter is the one on jump times and holding times, where continuous-time Markov chains are formulated using the joint distribution of the holding times and the jump chain. It goes on to define a Markov chain as follows:
It then goes on to give three constructions of a Markov chain. The one I liked best comes with a good analogy:
Imagine the state-space I as a labyrinth of chambers and passages, each passage shut off by a single door which opens briefly from time to time to allow you through in one direction only. Suppose the door giving access to chamber j from chamber i opens at the jump times of a Poisson process of rate q_ij, and you take every chance to move that you can; then you will perform a Markov chain with Q-matrix Q.
This analogy is then used in taking a set of Poisson processes to construct a Markov chain. I found this kind of analogy very appealing, as such things are easy to understand and recall in various situations. Actually, this whole construction of Markov processes using holding times and jump times is very interesting in itself and makes tremendous sense for getting a general idea of the way random processes work. The sections on explosion times, forward and backward equations, and non-minimal chains are to be read selectively, as the difficulty level shoots up exponentially.
In the author’s words, “There is an irreducible level of difficulty here.”
Chapter 3: Continuous-Time Markov Chains - II
With the necessary mathematical tools and framework covered in the previous chapter, the author introduces continuous-time Markov chain concepts in the same order as they were developed for the discrete case. Basic terms are introduced: the generator matrix, the initial distribution, the transition matrix and transition probabilities. Class structure for the states is discussed, wherein the states are categorized based on their communication status. In this context, the systems of equations needed to solve for hitting probabilities and expected hitting times are introduced. Depending on the Q-matrix, these systems of equations help one determine the minimal solution for the hitting probabilities and expected hitting times of the various states. Subsequently, recurrent and transient states are defined in the context of continuous-time processes, and conditions for verifying the recurrence or transience of a state are given. These conditions are in integral form, whereas their counterparts in Chapter 1 for discrete Markov chains are in summation form. Invariant distributions and the ergodic law are discussed.
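As a concrete illustration of the hitting-probability machinery (my own toy example, not from the book; `hitting_probs` is a hypothetical helper), here is the linear system "Q h = 0 off the target set, h = 1 on it" solved exactly over rationals for a small birth-death chain with two absorbing boundaries. For a finite chain with absorbing boundaries this system has a unique solution, which is the minimal one.

```python
from fractions import Fraction as F

def hitting_probs(Q, target, zero_states):
    """Hitting probabilities h_i = P_i(hit `target`) for a finite CTMC with
    generator Q: h = 1 on the target set, h = 0 on the competing absorbing
    states, and sum_j Q[i][j] * h[j] = 0 for every other state i.
    Solved exactly by Gaussian elimination over rationals."""
    n = len(Q)
    unknown = [i for i in range(n) if i not in target and i not in zero_states]
    idx = {s: k for k, s in enumerate(unknown)}
    m = len(unknown)
    # Build A x = b: sum over unknown j of Q[i][j] h_j = -sum over target j of Q[i][j].
    A = [[F(0)] * m for _ in range(m)]
    b = [F(0)] * m
    for i in unknown:
        r = idx[i]
        for j in range(n):
            if j in idx:
                A[r][idx[j]] = F(Q[i][j])
            elif j in target:
                b[r] -= F(Q[i][j])
    # Gaussian elimination with partial pivoting (exact arithmetic).
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(m):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                for k in range(c, m):
                    A[r][k] -= f * A[c][k]
                b[r] -= f * b[c]
    x = [b[r] / A[r][r] for r in range(m)]
    h = {s: F(1) for s in target}
    h.update({s: F(0) for s in zero_states})
    h.update({s: x[idx[s]] for s in unknown})
    return h

# Birth-death chain on {0,1,2,3}: up-rate 1, down-rate 2; 0 and 3 absorbing.
Q = [[0, 0, 0, 0],
     [2, -3, 1, 0],
     [0, 2, -3, 1],
     [0, 0, 0, 0]]
h = hitting_probs(Q, target={3}, zero_states={0})
print(h[1], h[2])  # gambler's-ruin answer: 1/7 and 3/7
```

Note that the hitting probabilities depend on the Q-matrix only through the jump chain, which is why the answer coincides with the discrete gambler's-ruin calculation with up-probability 1/3.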
The basic funda that this chapter and the previous chapter (for the discrete case) teach is that one must analyze a Markov chain by posing a set of standard questions to the problem at hand.
The first 3 chapters comprise roughly 60% of the book’s content. I found Chapter 4 very dense and overwhelming, as the author tries to extend Markov chains and cram in concepts relating to martingales, potential theory, electrical networks and Brownian motion, all in just 40 pages. The last chapter gives a flavor of the applications of Markov chains through sample models in biology, queuing theory, resource management, Markov decision processes and MCMC.
The book gives an intuitive yet rigorous introduction to discrete and continuous Markov chains. The decomposition of a Markov chain in terms of holding times and jump chain to explain various concepts of a continuous Markov process is the highlight of this book.
Posted at 08:18 PM in Books, Math  Permalink  Comments (0)  TrackBack (0)
Via Delhi IBSEN Festival 2010:
This play is an adaptation of Henrik Ibsen’s “Lady from the sea” and is directed by Ila Arun.
The story is told in folk form, narrated by the balladeers Bhopa and Bhopi, traditional storytellers from Rajasthan, who use the Phad, a painted scroll, to tell their stories. The play begins with the Bhopi telling her husband that she has discovered a new phad, called 'Ibsenji ka phad', in her late father-in-law’s trunk. She persuades her husband to use this in their next performance, instead of the old, well-worn tales of gods and rajas. He reluctantly agrees and they start their story:
Ibsen’s “The Lady from the Sea” becomes the story of Rampyari (Ibsen’s Ellida Wangel), so named because she was adopted by a priest devoted to Baba Ramdev, a folk deity of Rajasthan. The priest brings her up as his own daughter. But as she grows up, there is a restlessness about her and a great obsession with water which even she cannot understand. She is drawn towards all the water bodies she can access in a desert land and fantasizes about the ocean and the sea, which she has seen only in her dreams.
One day, a handsome stranger comes to the temple, riding on a horse, just like the hero Baba Ramdev, searching for water for his thirsty horse. She leads him to the ‘bauri’, her favourite haunt, and finds herself fascinated by this stranger (also called the Stranger by Ibsen). He tells her of far-off places, of the coast where he comes from, of the sea which he has sailed in a merchant ship. He is also a musician, playing the alogoja. Her fascination takes on a new turn as she is carried away on the wings of fantasy to a world which has so far existed only in her imagination, the vast and limitless ocean. For her he is her Samander Khan, the spirit of the sea. One day he tells her that he will have to return to his hometown to board his ship. She begs him to take her with him on his horse so that she can catch a glimpse of the ocean which she has always seen in her dreams. Before bidding farewell, they seal their bond by putting their rings on a key ring which they fling far into the ocean, vowing fidelity to each other.
As time passes, she hears that the stranger had drowned in the ocean and she confines herself to a reclusive life, carrying only her obsession with water with her. Then another stranger enters her life, Soubhagmalji Vaidya, (Ibsen’s Dr. Wangel) who comes to the temple to pray for solace after the tragic loss of his wife. He is intrigued by this young girl and hesitantly proposes marriage to her, saying that he needs a companion for himself and a mother for his two young daughters. After her first reaction of shock because of the difference in age, she agrees, her one condition being that he make a home for her near water, since she is like a mermaid that cannot live away from water.
That is a promise he keeps, and Rampyari’s love affair with water continues. But fate plays its hand again and the stranger appears, compelling her to leave her husband. After much agonizing, she rejects his suggestion and decides to remain with her husband, who has been the perfect husband and protector. But for the Bhopa and Bhopi, there are several unfamiliar references along with unexplained behavioral differences from a different culture. So the story deviates from the original script as the two discuss the strange world of the Wangels, their daughters and the others who stray into their lives. Somewhere in the course of the narrative, the 'sutradhars' or storytellers themselves become the characters they play, almost as if the spirit world and the real world had merged into one. But fact is stranger than fiction, they say.
The dazzling display of Rajasthani dance with all its bright colors and costumes, rendered in the “Phad” tradition, is a treat to watch!
Posted at 01:52 AM in Play  Permalink  Comments (0)  TrackBack (0)
Real Statistics is not primarily about the Mathematics which underlies it: common sense and scientific judgment are more important. But there is no excuse for not using the right Mathematics when it is available.
- David Williams
Posted at 02:09 AM in Reflections  Permalink  Comments (0)  TrackBack (0)
I am hooked on “Markov Chains” these days, and they fascinate me because their applications turn up in almost every field I look at. I happened to go over the PageRank algorithm. From a Markov chain’s perspective, the algorithm can be summarized in 2 steps:
Step 1 : Represent a random surfer’s movement as a Markov chain
Here N is the total number of pages indexed by Google, Q is a transition matrix (irreducible and aperiodic) that captures the probability of moving from one page to another, p is the vector of page ranks of the pages on the internet, and alpha is an adjustment factor that takes into consideration the inherent importance of a page.
Step 2 : Solve the Markov chain equation of Step 1 to compute the page rank, i.e. find the stationary distribution. This is done using iterative methods from linear algebra (power iteration).
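A toy version of Step 2 can be written as power iteration. The sketch below is a hypothetical illustration, not Google's production algorithm: `links[i]` lists the pages that page i links to, and `alpha` is the damping factor.

```python
def pagerank(links, alpha=0.85, tol=1e-12):
    """Power-iteration sketch of PageRank.  links[i] is the list of pages
    that page i links to; alpha is the damping factor.  Iterate the
    random-surfer update until the rank vector stops changing."""
    n = len(links)
    p = [1.0 / n] * n
    while True:
        new = [(1.0 - alpha) / n] * n   # teleportation mass
        for i, outs in enumerate(links):
            if outs:
                share = alpha * p[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:                        # dangling page: spread mass uniformly
                for j in range(n):
                    new[j] += alpha * p[i] / n
        if sum(abs(a - b) for a, b in zip(new, p)) < tol:
            return new
        p = new

# 3-page web: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
ranks = pagerank([[1, 2], [2], [0]])
```

The damping term is what makes the chain irreducible and aperiodic, guaranteeing that the iteration converges to a unique stationary distribution.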
This algorithm needs to be tweaked day in and day out, as the transition probability matrix keeps changing every second. As soon as any webpage i puts out an external link to a page j, the corresponding entry Q_{ij} in the transition matrix changes.
Combine these 2 steps with more than a decade of computational learning about each element in the transition matrix, and you get a multi-billion-dollar company called “Google”.
Posted at 08:33 PM in Firms, Math, Statistics  Permalink  Comments (0)  TrackBack (0)
Rory Sutherland’s commencement speech
Posted at 01:59 AM in Reflections, Talks  Permalink  Comments (0)  TrackBack (0)
“ Furious activity is no substitute for understanding.”
- H.H. Williams
Posted at 01:49 PM in Reflections  Permalink  Comments (0)  TrackBack (0)