Monday, July 31, 2006

Derivatives Primer

[
Extra-article comment:
Hello, friends. I am publishing this draft because I want to give my loyal following a hint of the glut of articles I am working on for the blog. I haven't finished any of them because I am researching many different subjects, I have been busier than usual, and these articles are tough, the kind that require extra effort and that I am proud of. Since this is a work in progress, I am not announcing it, but I will allow you to see it. Expect potentially substantial changes before I delete this notice.
]

The Options Strategies Thread.

Part #1: A primer on Derivatives pricing.

Before describing the workings of the Angelwarewest-Asatru Skald-Knapper Tech-Chicagrafo money pump, it became clear that the effort wouldn't provide much value to the audience unless I explained the highly technical fundamentals of stock options and derivatives first. This may be basic for some people; if that is your case, please read this article anyway and post a comment that may be useful for the rest of the audience.

Words of caution are necessary: The Stock Market attracts the very brightest, people able to understand the sophisticated concepts and issues of financial analysis; it is already very tough to be "smarter" than the market just by picking, with a modicum of consistency, which stocks will go up and down; with options, where you multiply the risks by an order of magnitude and the technicalities are daunting even to bright investors, the chances of losing everything by getting into waters too deep are very high. It worries me deeply that I have friends and acquaintances on the message boards who regularly trade options without a thorough understanding of derivatives concepts. That is like playing with nitroglycerin. I hope that beginning with the fundamentals will make it easier to understand the dangers of hedging or speculating with options. If you think you know a lot about this subject, I reiterate the invitation to keep reading. If something surprises you, please leave a comment. I myself discovered quite a number of misconceptions I had when I undertook the task of seriously studying options.


First, Derivatives:

I guess that those who trade options want to make money off them. Just as it is important to understand when a company may be over- or under-valued, it is important to be able to assess the right price of derivatives. In the case of options, it is also very important to understand their "wasting asset" nature. With shares, if you are right about your evaluation of a company, you can stay long or short for as long as it takes for the market to agree with you; but with options, you not only have to be right, the market has to agree with you quickly enough, otherwise your options will keep accumulating losses until they are worthless if you went long on them, or until they cost you a fortune if you went short. It is therefore also very important to understand accurately how options decay with time. As if that weren't enough complication already, shares have only one price, one bid/ask pair; but options add three more dimensions: strike price, expiration, and role (whether you go long or short; they are not symmetrical), forcing you to compare among different competing views on the same underlying.

Replicating Portfolio

A Stock Option Contract is a "proxy" for the underlying: an instrument that allows you to control 100 shares. If, with shares and cash, you can do the same thing an option allows you to do, then the price of that setup (the portfolio) is the price of the option.

There is a grave misconception running rampant: the price of a derivative does not depend on the expectation that the underlying will go up or not; it depends only on how expensive it is to assemble a portfolio that replicates the gain curve of the derivative. Let me explain it this way: if AMD is almost certainly going up, with almost no chance of going down, that by itself will not affect the price of the options; the price will be affected only if the share price itself, the underlying, goes up as a result of those expectations. That is, expectations may only indirectly affect derivatives prices, to the extent that they are priced into the underlying.

To replicate a portfolio that will behave like the derivative you must take into account the cheapest interest rate, which is the risk-free rate, the rate on government bonds, because governments cannot declare bankruptcy.

Without going further, a dummy example: suppose a stock currently traded at $100 will be priced in six months at either $125 (with probability 65%) or $80 (with probability 35%), and the stock will not have any other prices but $125 or $80. Suppose that the "risk-free" interest rate is 5% yearly.

Let's price a European-style call option on those shares, with a $100 strike price and six months to expiration.

No options-pricing calculator that I know of will tell us a price for that option, because the shares are weird in the sense that they will have only two possible prices in six months, but we can still find a methodology to price it. Let's analyze the value of the option at expiry:
  • If the stock moves to $125, the value at expiration of the option will be $25.
  • If it goes to $80, its value will be 0.
Suppose that it is possible to replicate those cases with a portfolio of shares and cash. The price of the option would be the cost of replicating it with shares and cash. So, let's find how many shares and cash replicate that option:

X is the amount of shares, and Y is the amount of cash.

From the first case, that the price goes to $125, we have:

(1) X*$125 + Y*SQRT(1.05) = $25 (the cash will grow 5% per year, or SQRT(1.05) in six months)

From the second case,

(2) X*$80 + Y*SQRT(1.05) = $0.

Subtract (2) from (1) and

(3) X*$45 = $25, thus X = 5/9 shares. And substituting, Y = -5/9*$80/SQRT(1.05) = -$400/(9*SQRT(1.05)) ~= -$400/(9*1.0247) ~= -$43.37

That means that going long 5/9 of a share and borrowing $43.37 at the risk-free rate gives a portfolio with exactly the same returns that the call option would give; thus, the call option price should be ~ 5/9*$100 - $43.37 ~ $12.19. Notice that it is irrelevant how probable the up or down cases are.
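For those who like to see the arithmetic spelled out, here is a minimal sketch in Python of the same replication argument (the function name and structure are mine; the numbers are exactly the ones from the example above):

    # Minimal sketch of the one-period replication argument from the example above.
    # We solve for X (shares) and Y (cash today) so that the portfolio pays exactly
    # what the option pays in both the up case and the down case.

    def replicate(s_up, s_down, payoff_up, payoff_down, growth):
        """growth = factor by which cash grows until expiration, e.g. 1.05 ** 0.5."""
        x = (payoff_up - payoff_down) / (s_up - s_down)   # number of shares
        y = (payoff_up - x * s_up) / growth               # cash today (negative = borrowing)
        return x, y

    s0, s_up, s_down = 100.0, 125.0, 80.0
    growth = 1.05 ** 0.5      # six months of risk-free growth at 5% per year
    strike = 100.0

    x, y = replicate(s_up, s_down, max(s_up - strike, 0.0), max(s_down - strike, 0.0), growth)
    print(x, y, x * s0 + y)   # ~0.556 shares, about -$43.37 of cash, call ~= $12.18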

Another example: a put option, strike $200, one year to expiry (this time the two end prices, $125 or $80, are reached in one year, so the cash grows by the full 1.05 factor):
Case it goes up to $125: the value of the put would be $75, thus (1) X*$125 + Y*1.05 = $75;
Case it goes down to $80: (2) X*$80 + Y*1.05 = $120.
Subtracting (2) from (1): X*$45 = -$45 <==> X = -1, and Y = $200/1.05 ~= $190.48.

Shorting one share and putting $190.48 to grow at 5% per year gives a portfolio with the same returns in both cases; the portfolio has an initial value of $190.48 - $100 = $90.48, and that is the appropriate price of the put option.
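The same little function from the sketch above reproduces the put numbers; note that the growth factor here is a full year of 5%:

    # Reusing replicate() from the sketch above for the $200-strike, one-year put.
    x, y = replicate(125.0, 80.0, max(200.0 - 125.0, 0.0), max(200.0 - 80.0, 0.0), 1.05)
    print(x, y, x * 100.0 + y)   # -1 share, ~$190.48 of cash, put ~= $90.48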

Risk-Neutral approach:

Do you see that, as long as the replicating portfolio gives the same returns as the option in every case, the probabilities of the cases don't matter? If they don't matter, we can assign "probabilities" that suit our calculations better. Let's say that the expected return of one share is the risk-free rate, that is, $100 initially will become $105; then, with p the "probability" of the stock going to $125, $125*p + (1 - p)*$80 = $105 <==> 45*p = 25 <==> p = 5/9. "Plugging" those probabilities into the put option cases, E(Pp) = 5/9*$75 + 4/9*$120 = $(375 + 480)/9 = $95, and applying the risk-free discount, dividing by 1.05, we get $90.48, the same price (!).
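Again, only as a sketch of mine, here is the same put priced through the risk-neutral route, to verify that the two approaches agree:

    # Risk-neutral version of the same put example: choose p so that the stock's
    # expected growth equals the risk-free rate, then discount the expected payoff.
    s0, s_up, s_down, growth = 100.0, 125.0, 80.0, 1.05

    p = (s0 * growth - s_down) / (s_up - s_down)      # 5/9, the "probability" of the up move
    put_up, put_down = 200.0 - 125.0, 200.0 - 80.0    # payoffs of the $200-strike put
    print(p, (p * put_up + (1 - p) * put_down) / growth)   # 0.5556, ~$90.48: same as replication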

We have seen so far that

1) Derivatives are agnostic with regard to expectations about the underlying,
2) the risk-free rate is essential to price derivatives,
3) we may assign fictitious probabilities to the cases just by assuming that the underlying will appreciate at the risk-free rate, and those probabilities allow us to price the options.

The next step to make the pricing suitable for real-life options is to assign (fictitious) probabilities to the infinite number of cases for the evolution of a stock price. The intelligent reader may construct a procedure, although laboriously, to price options in cases in which every price step is a bifurcation: for every two contiguous final prices, you can price the option for the previous step, and use those values to price the step before that, and so on back to the current period; that is the "binomial" method.
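As an illustration of that backward-stepping procedure (my own sketch, with the per-step moves taken from the toy example above; it is not the notation of the book this post is based on), a small binomial pricer looks like this:

    # Sketch of the "binomial" method described above: build a tree of end prices,
    # compute the option payoff at expiration, then roll back one bifurcation at a
    # time using the risk-neutral probability and the per-step risk-free factor.

    def binomial_call(s0, strike, up, down, step_growth, steps):
        """up/down are per-step multiplicative moves; step_growth is the risk-free factor per step."""
        p = (step_growth - down) / (up - down)   # risk-neutral probability of an up move
        # option values at expiration, for every possible number of up moves
        values = [max(s0 * up**j * down**(steps - j) - strike, 0.0) for j in range(steps + 1)]
        # roll back through the tree, one step at a time
        for _ in range(steps):
            values = [(p * values[j + 1] + (1 - p) * values[j]) / step_growth
                      for j in range(len(values) - 1)]
        return values[0]

    # With a single step this reproduces the toy example above (~$12.18):
    print(binomial_call(100.0, 100.0, up=1.25, down=0.80, step_growth=1.05 ** 0.5, steps=1))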

But in real life we don't have bifurcations, we have jumps of random magnitude. For practical purposes, we can assume infinitely many end prices. In that scenario, one case is a sequence of jumps and bumps. Just as it happens in many scenarios in which you have binomials and want to pass to the continuous analysis, the binomials transform into "Normals" (Gaussian bells). Here, the number of "paths" or cases that end at a specific price may be modelled as following a "Normal" random variable distribution; of course, not normal in the price of the stock, because there cannot be negative prices, but in the logarithm of the relative price variation.

A stock traded initially at $10 that ends up at $100 appreciated 10 times; the (natural) logarithm of this relative variation would be about 2.3. If it crashed to $0.10, 1/100 of the original price, about -4.6; if it remained flat, 0. Taking the logarithm of the relative variation also conveys the true nature of investment: exponential.
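A quick numerical check of those figures, using natural logarithms as Black-Scholes does:

    import math

    # Logarithm of the relative price variation for the three cases mentioned above.
    for start, end in [(10.0, 100.0), (10.0, 0.10), (10.0, 10.0)]:
        print(end / start, math.log(end / start))
    # 10x   -> ln(10)   ~  2.30
    # 1/100 -> ln(0.01) ~ -4.61
    # flat  -> ln(1)    =  0
    # Log returns also add across periods, which fits the exponential nature of
    # investment: ln(S2/S0) = ln(S1/S0) + ln(S2/S1).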

To complete a description of the infinite number of cases that we will take into account to price options, we need a fifth ingredient (the others are stock price, strike price, time to expiration, and the risk-free rate): the volatility of the variations of the stock price.

With those five ingredients, Fischer Black and Myron Scholes cooked up "The Pricing of Options and Corporate Liabilities" in the excellent year of 1973 (excellent because it brought "Yours Truly" into this world!), just a month after the Chicago Board Options Exchange opened (in May, of which I am a harvest). Both epoch-defining moments eventually enabled the whole options market to become as liquid and developed as it is today.
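The next article will go through the formula properly, but as a teaser, here is a minimal sketch of the Black-Scholes price of a European call (no dividends) computed from those five ingredients, with illustrative numbers of my own choosing, so you can see where each ingredient enters:

    import math

    def norm_cdf(x):
        """Standard normal cumulative distribution, via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def black_scholes_call(s, k, t, r, sigma):
        """s: stock price, k: strike, t: years to expiration,
        r: continuously compounded risk-free rate, sigma: volatility of the log returns."""
        d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
        d2 = d1 - sigma * math.sqrt(t)
        return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

    # Illustrative inputs only, not a quote for any real option:
    print(black_scholes_call(s=100.0, k=100.0, t=0.5, r=math.log(1.05), sigma=0.30))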

The complication with the Black-Scholes model is that it requires differential calculus to be understood. As a matter of fact, if you don't have a solid understanding of that subject, you would do better to stay away from options, because you won't be able to detect when the theoretical price is not adequate, won't be able to compare among different strike prices and expirations, and, worst of all, you will be utterly unequipped to choose the expiration times that best suit your strategy.

If you object that the market will price the options for you, you are very wrong. Not even with a stock as profusely traded as AMD is there enough liquidity in the options market; thus the spreads are high and the prices are susceptible to distortion: the number of investors who regularly trade options is insignificant compared with plain shares, and the options trader has not just a company to choose, but literally hundreds of options, so the already small liquidity is spread hundreds of times thinner. I don't want to deviate from the main subject, but I have made a bit of a living lately scavenging for minor distortions here and there in options pricing; thus I have first-hand knowledge of how easily ignorant option players lose their money, and not only that, I know that I am making mistakes, due to incomplete understanding, that the real "pros" are taking advantage of.

The next article will get into Black-Scholes to finish Part #1.

P.S.: This approach to explaining options pricing was imitated from the book "Investment Mathematics" by Andrew Adams, Philip Booth, David Bowie, and Della Freeth; "Wiley Finance". I don't dare recommend this book, because I know nothing about the other books that cover this subject; while I was looking for a way to explain things about options, I just took the first book on the subject that crossed my path, which was this one, and replicated its approach because it made sense to me. I learned in a very inadvisable way: googling every concept until I understood enough. If possible, don't make the same mistake.

Thursday, July 27, 2006

Amazing Week

Hello Audience!

Ever since I posted "Sell AMD" I've had very intense days; in the following weeks you will see for yourselves how intense, because I will be posting about the many events here.

First, there was the pre-earnings period for Intel, in which I had a significant bearish position, and which coincided with the period in which I had to receive your criticism of my change of stance, its suddenness, etc., and to reply to it responsibly.

Then Intel reported, and the following day I made this mistake in (supposedly) "AMD gains revenue share", and had the embarrassing bad luck of being corrected thanks to A1 in the blog and still, unaware of the correction, being fooled by my own calculations into covering my written calls at a loss of $100 each, only to lose even more money thanks to AMD's results. Remember that I couldn't just sell my shares; my investment strategy, although demonstrably bearish on AMD, includes going long in shares, but of course this "last opportunity" of going long without the cushion of written calls made me lose.

AMD reported, I had to trade furiously on Friday to fully implement my strategy, in particular buying LEAPS puts, and while doing that, SLAM! the rumor of ATI broke.

Then, the whole weekend I had to do a *lot* of homework to finish setting up the "Angelwarewest - Knapper Tech - Asatru Skald - Chicagrafo" money pump at the cheapest cost, which involved quite a lot of research. I also shared some of my results on the "Ireland" message board and discussed the mechanism at length with our friends there; I will detail the results of my research in the blog as soon as I organize all the material into a publishable format. On top of that, further analysis of Intel's Everest-sized mountain of inventory and of AMD's results was warranted.

Sunday night, casually looking at my balance, I found out that I was in a margin call! I hope to amuse you with the anecdote as soon as I can put it down in writing.

Monday morning, I had to make a number of trades and listen very attentively to the conference call about the merger, and I have been studying the purchase of ATI. I think that the audience I have garnered thanks to the technology insights will not be disappointed with the ideas I am profiling for publication.

From Tuesday on, I have been catching up with my life, and today I finally got a chance to update the blog. Stay tuned!

Thursday, July 20, 2006

Tendencies

Those who read "Sell AMD" know that I was speaking of tendencies that in my opinion are very clearly defined on the bearish side, but there isn't any part of the article with specific timings, and I forgot to include elements which may revert the trend. Although I stand by the whole article, because AMD doesn't have a plan and that leaves the company in grave jeopardy, new plans may make AMD a good investment again; it's just that I don't see that as likely, having watched AMD's management closely for the last few months.

But major events such as Intel's reports and AMD's reports take precedence over tendencies.

Also, you should know that I couldn't "just sell" my shares, because I wrote covered calls and didn't want to expose myself to a violent temporary reversal of the trend.

After I saw Intel's results, I was able to gauge how good AMD's results were going to be for the quarter, so I decided to go plain long with my shares for the earnings report, covered my written calls, and hoped for the best.

Tomorrow, or perhaps a bit later, I will keep implementing the "Asatru Skald money pump" with a bit more emphasis on protective puts. I also have to describe the details of the "Asatru Skald three-cycle volatility-ripple money pump".

Suffice it to say that this is not a "faith crisis", but a rude awakening to the reasons why AMD's stock price fell to less than half.

AMD gains 0.9% processor revenue market share

Update: I made the grave mistake of not accounting for the 14 weeks; thanks to "A1"'s comment I awoke to the mistake, although very late.
Originally, it seemed that AMD had won almost a full percentage point. My apologies. This incorrect analysis misled me into covering my written calls under the assumption that AMD had gained terrain. I hope that next time, dear readers, I will be able to make use of your help before it is too late to avoid losing money or being publicly embarrassed for so long. "A1": no, AMD didn't lose revenue share either, and thanks for the tip.

According to Intel's report
Microprocessor revenues (Digital Enterprise Group + Mobility Group) are 3.338 G$ and 1.958 G$, or 5.3 G$.

AMD's revenues are 1.22 G$ in 14 weeks, for a 13 week equivalent of 1.133 G$

The combined revenues (13-week adjusted) are 6.43 G$.

That means that the revenue market shares are now 82.4% for Intel and 17.6% for AMD.

Since Intel comes from Q1 (DEG + MG = 3.892 G$ + 2.347 G$ = 6.239 G$), it experienced a decline in processor revenues of (6.24 - 5.30)/6.24, or roughly 15% (by the way, Intel ceased to receive almost one billion dollars, about 4/5 of AMD's total revenues).

With these numbers, we can see that in Q1 the revenue market share was Intel processors 6239 to AMD 1332, that is, 6239/7571 versus 1332/7571: 82.4% to 17.6%.

Surprisingly, the proportion remained the same.
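For transparency, here is the arithmetic behind those percentages as a small script (my own sketch, using only the figures quoted above, all in G$):

    # Revenue-share arithmetic from the figures above (all in G$).
    intel_q2 = 3.338 + 1.958              # DEG + MG
    amd_q2 = 1.22 * 13 / 14               # 14-week quarter scaled to a 13-week equivalent
    total_q2 = intel_q2 + amd_q2
    print(intel_q2 / total_q2, amd_q2 / total_q2)   # ~82.4% vs ~17.6%

    intel_q1 = 3.892 + 2.347
    amd_q1 = 1.332
    total_q1 = intel_q1 + amd_q1
    print(intel_q1 / total_q1, amd_q1 / total_q1)   # ~82.4% vs ~17.6% as well
    print((intel_q1 - intel_q2) / intel_q1)         # Intel's sequential decline, ~15%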

Wednesday, July 19, 2006

Pre - Conference Call

This quarter AMD will present about $0.29 EPS, substantially more than the $0.22 average the analysts expect.

But I am not sure that the surprise will matter. Last quarter, AMD also beat expectations, landing comfortably within guidance despite Intel's furious dumping, and yet the stock price slid 15% over the following days, until the Dell announcement.

I have been thinking about why AMD sells off after good, concrete news; perhaps the investors are holding on, counting on the news, the news comes, there is no reaction, and then they sell. For investors, AMD hasn't had an exit point, a period to reduce their interest in AMD, for more than four months now; after significant events such as the earnings report, it is natural not to want to wait any longer.

Then, there is the issue of Intel promising to take over the world with Woodcrest/Conroe/Merom, the evident interest in Woodcrest processors, and most Wall Street analysts wondering about the gamble of betting the whole company on the premiums of multiprocessor servers and on the scarce volumes and profits that Woodcrest/Conroe/Merom and Pentium 4 dumping will leave for AMD uniprocessors.

I don't expect "an answer" to either Conroe or Woodcrest; if anything, an answer to Merom, with the new mobile architecture expected to debut one year from now.

Going back to the $0.29 EPS: further declines next quarter, and perhaps a "flat to slightly down" Q4, mean less than $1.25 per year. At the current interest rate, the P/E multiplier should be about 17 for low-risk stock market investments; think about it, $1.25 EPS at a 17x P/E multiplier is $21.25 per share. But AMD is not a sure investment. So far, AMD has been growing, and that's why it has been receiving far larger multipliers. But will it keep growing?

Yes, AMD has much greater production capacity. Sad that it lost all pricing power in every uniprocessor segment.

Monday, July 17, 2006

Sell AMD: My best advice

[Updated 2 times]

Time to sell, folks.

[Update]
A brief summary:

Ever since the Athlon, AMD has had the best product. All of AMD's current plans assumed that they would keep having the best product. But Intel comes with Conroe, and now the best processors are split between mobile/desktop/uniprocessor server (Intel) and multiprocessor server (AMD). So AMD's plans are not valid anymore and a contraction is coming.
[/Update]

A not so brief summary:

Intel's new products will dramatically outcompete every single processor in AMD's offering. The extra multiprocessor server profits AMD earns from the excellent growth in that segment cannot compensate for the obliteration of the comparatively huge profit volumes from single-processor systems. With such a cut in revenues, and especially in profits, AMD's plans to finance capacity expansion to bring more economies of scale will be stopped, and probably also its ability to come up with improved products and catch up to Intel. On top of that, it seems that the market for computers is indeed commoditized and weak, with the aggravating factor that Vista will probably arrive with an emphasis on 32-bit computing.

This competitiveness stumble was completely unanticipated by AMD's management, which was even dismissive of the threat it represents. Thus, either management made a great mistake or misled us on purpose, so it doesn't deserve further faith.

The jeopardy is very real, because the much deeper underlying reality of Intel's advantages due to economies of scale now has a timeframe in which to impose itself over AMD's recent market/revenue/mind share gains, all the way to AMD's bankruptcy.

These risks are already very high, which would demand very low P/E multipliers, but worse yet, all the money earned will be reinvested in capacity and technology, in a feedback loop of risk.

As if the previous weren't enough, the economy is weak, the Federal Reserve is fighting inflation, energy prices are at record highs, and the nuclear ambitions of Iran and North Korea may precipitate two major new conflicts.

Reasons in Detail:

Current AMD competitive advantages: AMD lags in everything but:

* Processor Interconnect
* AMD64
* The integrated memory controller's help with virtualization
* Partners
* Brand Equity
* And sSOI is merely a tie, at best, to Intel's Silicon Process

What has AMD done with these advantages?

The processor interconnect enables multiprocessor systems, which we have already stated are not enough to sustain everything else. It allows for coprocessors, a market that has been slow to emerge, and even for graphics 3D engine coprocessors, which not only don't exist, but which people have merely begun to talk about today. Thus, the coprocessors are a durable competitive advantage, at least until Intel launches CSI, but that will not help the company solve its current stumble. Scuba diving, I can be happy if my friends promise me an air tank that will keep me submerged for another two hours, but if my air has run out and I have to wait underwater ten more minutes for the two-hour tank, I will be dead by the time it arrives. AMD also needs to breathe.

AMD64's technology is mesmerizing. It took the mess that Intel made of x86 and made it elegant. Its implementation is even better. Today, every AMD64 processor runs a program faster at 64 bits than the same program compiled for 32 bits, as it should, without adding appreciable cost to the processor, which is no small feat, as opposed to the possibly microcode-interpreted Intel EM64T, which is not even supported in flagship (?) products such as Yonah "Core Duo" and Sossaman servers (!). This should have been a cash cow for AMD, but the AMD64 marketing has been preposterous.

AMD agrees with me on its importance, otherwise it wouldn't have implemented it so superbly in every processor from the Sempron up; moreover, according to the latest AMD SEC 10-Q filing:

We must achieve further market acceptance of our 64-bit technology, AMD64, or we will be materially adversely affected.
Our AMD Opteron processors are critical to our strategy[.] Similarly, our AMD Turion 64 processors are critical[.] Increasing market acceptance of these processors, our AMD Athlon 64 processors for desktops and the AMD64 technology on which they are based is subject to risks and uncertainties including:
• the continued support of operating system and application program providers for our 64-bit instruction set, including timely development of 64-bit software applications and applications that can take advantage of the functionality of our dual-core processors;
• our ability to produce these processors [...] timely[,] in the volume and with the performance and feature set required [...]
• the availability, performance and feature set of motherboards, memory and chipsets designed for these processors, in the volume and with the performance and feature set required by our customers.
If we are unable to achieve further market acceptance of our AMD64 technology, we would be materially adversely affected.
AMD can do a lot to help, or to hinder, the market's enthusiasm for AMD64. What has AMD done about it?

I keep saying it: AMD is wasting its efforts trying to convince Microsoft to support AMD64. The real way to get Microsoft on board with AMD64 is to threaten it with Open Source and Linux competitiveness. But AMD is not making the all-out effort that this requires. So far, practical and pervasive 64 bits has been the sole preserve of AMD, thus every effort AMD spent on 64-bit market development would have come back to it in the form of larger markets.

It makes sense, even today, for AMD to identify the projects in which the most 64-bit market development could be obtained with the smallest budgets. Free and Open Source Software, where in most cases it is trivial to go 64-bit by just recompiling, is an obvious choice. It also makes sense to fund projects so that they provide AMD64 functionality first and 32 bits second. Does AMD want to increase adoption of Turion 64 bits? Fund Orinoco with one million dollars to make Linux support all Turion 64 wireless chipsets. There aren't that many chipsets, so perhaps not even 1 M$ is needed; and then every Linux user will feel extremely enthusiastic about making 64 bits the default, Linux users will revolt against the limited 32-bit Core Duo, will demand more compatibility, and a whole virtuous cycle of AMD64 acceptance will occur. By the time Intel catches up to AMD, AMD may have biased 64 bits in laptops toward AMD64 through extensions that improve power savings, preserving competitive advantages. A project such as this may kickstart a whole ecosystem of AMD64 platforms with the same economics as Centrino, but with variety!

What is happening in reality? The Linux Turion 64 owner is left on his own. That is stupid.

Beyond Open Source, it is clear that AMD64's marketing efforts have failed miserably, because the whole world seems to think that it is only something related to servers and 64 bit computing, which hides the important benefits of the widened register file.

AMD missed so many opportunities to strike it rich with AMD64 that it gave Intel the chance to finally catch up. And by not putting pressure on Microsoft (through Free/Open Source Software AMD64 support), it didn't force a commitment to 64 bits from them; now Microsoft must be backpedalling with everything it's got on the 64-bit issue: since Conroes run slower in 64 bits than in 32 bits, Vista Premium requiring 64 bits would make Vista be perceived even more as a computer retardant than it already is. Why would they bother to port zillions of drivers? And it's not as if they could anyway; the real problem is the lacklustre enthusiasm of hardware and software providers for doing the Vista 64-bit drivers. In the end, it will be much simpler for everyone to patch everything with Page Address Extensions and keep doing 32-bit business as usual. Will it suck big time? Yeah, but who cares if people buy it? In three years the complexity of Vista "supporting" 8 GB through PAE will not be a defect, but a feature, or the excuse for yet another Service Pack. I already mentioned this thesis here; I would say that the prediction has been materializing.

In conclusion, AMD doesn't help with the adoption of AMD64, and the market doesn't adopt it. AMD only offers it.

AMD's virtualization technology is another no-small-feat advantage, but even harder to market. I had quite a number of annoyances trying to figure out what exactly is the virtualization support that AM2 processors offer. I couldn't get to its specification, there were dead links on amd.com, so I still don't know much more about it than earlier in the year when I wrote "Pacífica Vs. Vanderpool"; but with Pacífica being much better than Intel's Vanderpool, it made my heart sink to read the following about Xen, a Free/Open Source Software project that is perhaps the second most important virtualization solutions provider in the world, right after VMWare:
1.4. Does Xen support Microsoft Windows?

The paravirtualized approach we use to get such high performance has not been usable directly for Windows to date. However Xen 3.0 added Intel VT-x support to enable the running of unmodified guest operating systems, including Windows XP & 2003 Server, using hardware virtualization technology. We are working on implementing support for the equivalent AMD Pacifica technology.
So, Xen leverages Vanderpool to run Windows but can't do the same with Pacífica. Ladies and gentlemen, there is no justification for this screw-up. I wonder if the same difficulty I experience getting hard specification data from AMD to evaluate its technologies is shared by the Xen developers. It is absolutely stupid to spend so many millions of dollars on developing, implementing, and announcing to the world the Pacífica virtualization support in AM2 if you fail so miserably at getting the ultra-cheap, ultra-important Open Source project to support your technology and thus guarantee its crucial acceptance. It is slightly off topic, but the reason why I could learn so much about the Itanium architecture is that Intel did a wonderful job of making all of its technical documentation available; it also financed Linux, the FSF, and almost everybody doing GPL work, heavily, to get them onto the IA-64 ship. I remember very well that the first prototypes of Itanium (Merced), the processor as a board full of circuits, ended up in Linux developer organizations to make sure they could go forward with development. There is no equivalent in AMD's efforts; actually, AMD hinders development with its inadequate web site.

In the latest 10-Q SEC filing already linked, AMD also mentions the following:
Intel exerts substantial influence[.] Because of its dominant position[,] Intel has been able to control x86 [...] standards and dictate the type of products the microprocessor market requires of Intel’s competitors. Intel also dominates the [...] chipsets, graphics chips, motherboards and other components[.] As a result, OEMs that purchase microprocessors for computer systems are highly dependent on Intel, less innovative on their own and, to a large extent, are distributors of Intel technology. Additionally, Intel is able to drive de facto standards for x86 microprocessors that could cause us and other companies to have delayed access to such standards.
I totally agree: any tech company that wants to be innovative can't just be another Intel pushover. That's why all the possibilities of nVidia, ATI, Broadcom, Marvell, Sun Microsystems, IBM, SOITEC, etc., joining their combined creativity, market courage, money and channels to AMD's offerings are so exciting. But perhaps these companies are just the mice having a party while the cat is asleep. With the utter uncompetitiveness of AMD microprocessors once Intel rolls out debugged Conroes, perhaps a shitty Intel integrated video (de)accelerator will cream an nVidia engine, at least in price. The time for a second coming of malignant, un-innovative companies such as Dell has arrived. All of a sudden an XPS looks sexy. All it takes to be "successful" is to be a faithful Intel lackey, as our textbook example, Dell, shows. Dell doing AMD multiprocessor systems? Forget it. Empty promises, empty announcements. The heat is already on for the "back to school" season; if Dell wanted to announce AMD consumer products, what would it be waiting for? Christmas? This is Lucy once again coming back to her true nature.

It is definitely cooler to say "I bought an AMD", but it won't be for long, unless Torrenza and 4x4 really catch fire, which I seriously doubt. There are enormous obstacles to developing software that takes advantage of multiprocessing, and they will not go away soon. Look at what happens in gaming: an FX-57 has quite acceptable performance, almost as good as an FX-60 that has TWO cores, because it is marginally faster (2.8 GHz vs. the 2.6 GHz of the FX-60). Gaming makes marginal use of the extra core, like 30%. To justify the extra expense and power consumption, AMD is asking applications to all of a sudden make use of not 1.3 cores but 3.1, and worse yet, on top of technology that only AMD has (remember AMD64 and AMD-V). Of course it will generate some enthusiasm, but a limited one.

This whole post assumes that Intel is willing to do what I said it wouldn't, that is, wage a real price war. What has changed is that Intel now commands the high ground. Not just the Conroe Extreme, but the second and even the third member of the family, the one with 2MB of L2 cache, beat the FX-62. That assumption changes the whole game: every percent of market share that Intel regains means, at the very least, three and a half percent less for AMD. The limited production of Conroes is more than enough to wash out AMD's premiums, and the problem only gets worse as we look into the future.

I had high expectations for the K8L, the quad cores, etc., until I realized that there was a seriously weak point: the minuscule L3 cache. Why? Because AMD doesn't have the daring to try a multi-chip single-package processor, in which a 4MB or 8MB L3 cache could be put off-chip, like the L2 cache in the Pentium Pro.

Earlier in the year, I thought that AMD was sandbagging with the 65nm schedule; I won't make the same mistake with the processor roadmaps. AMD hasn't crossed 3GHz, and there is no hurrying of 65nm processors.

I had faith in AMD's management; I thought they had an answer to Conroe that we didn't know about. But it has been five months since the first serious talk about Conroe, and still no answer; worse, denial.

Leaving everything hanging by the thread of Intel not being able to debug Conroe is not responsible.

I earlier thought that AMD was the leader of a world-changing initiative: to convert the processor monopoly into an ecosystem of innovation. And although that is what they are trying, and they had their chances, the people at AMD are just humans who were caught by surprise.

Thursday, July 13, 2006

A bit of defense for SOI

I have received intelligent criticism of my enthusiasm for AMD taking on the extra expense of including SOI in its silicon manufacturing, to which I gave a reply, but a new article allows me to go deeper into the subject:

In an interview, "Turn Down the Heat ... Please", which I found thanks to an article in "The Register", "IBM: Cell-like CPU yields 10-20 per cent", Ed Sperling asked Tom Reeves, VP of semiconductor and technology services at IBM, what the next big thing in chip technology would be, and Mr. Reeves answered:

Through the ’70s and early ’80s, bipolars went up to 100 watts. We had water-cooling systems, but you needed something new. Then we started with CMOS, [...] Now, 20 years later, we’ve got 100 to 120 watt chips again. Power is everything. The efforts we’re taking to get leakage power down for cell phones or a base station or a Cisco switch are enormous. If you look at a chip in a base station or a switch, they’re 40 watts, and there are a lot of them. The total wattage gets up to 5,000 or 10,000 [!!]. So the major focus now is not on Moore’s Law [!!] and how you get the next density step. We’ll get that. How you get the next performance step is harder work than it’s been, too. But the most important issue is how you manage power. Leakage power at the most advanced lithography is very challenging. And with active power, can you cool the gain? College kids were hanging some gaming systems out their dorm windows to cool them down.
Silicon on Insulator may not help at all to overclock a µ-processor, which is a nice feature that the gaming market appreciates (the materials used for SOI are good heat insulators as well, so they obstruct the effort to extract the extra heat that the overclocker forces the processor to generate by over-volting and over-clocking it), but it still helps with the very important leakage.

The simplistic explanation is this: a transistor behaves a bit like a capacitor; to switch, it needs to get rid of the charge it has accumulated. The lower the charge, the lower the energy loss associated with switching, and the lower the capacitance, the lower the charge. SOI helps precisely by lowering the capacitance. According to Wikipedia, SOI helps about 30% with transistor leakage and about 15% with switching speed (of course: it takes less time to get rid of a lower charge at the same current level, because the voltage is the same and so is the resistance).
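To make the capacitance argument concrete, here is a toy calculation of my own; the component values are invented purely for illustration and are not AMD or IBM data:

    # Toy model: treat the switching node as a capacitor C charged to the supply
    # voltage V and driven through an effective resistance R. Charge, switching
    # energy and RC delay all scale with C, so lowering C (as SOI does for the
    # junction capacitance) reduces both the energy per switch and the switching time.

    def switching(c, v, r):
        charge = c * v            # charge that must be moved on every switch
        energy = 0.5 * c * v * v  # energy dissipated per switch
        delay = r * c             # RC time constant
        return charge, energy, delay

    bulk = switching(c=1.0e-15, v=1.2, r=10e3)   # hypothetical bulk transistor
    soi = switching(c=0.8e-15, v=1.2, r=10e3)    # same transistor with ~20% less capacitance

    for name, (q, e, d) in (("bulk", bulk), ("SOI", soi)):
        print(f"{name}: charge={q:.2e} C, energy/switch={e:.2e} J, delay={d:.2e} s")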

We shouldn't dismiss Mr. Reeves' opinion as merely that of an interested party; what he points out is perfectly clear and true. Power: it is power that is the issue of these times. The use of the more expensive SOI technology, and of slower clock speeds that are harder to market, is a sign of a pervasive attitude at AMD: they focus on solving the real technology issues. Just like I said in "65nm Is Just Intel Marketing", Intel doesn't bother; at Intel, the basic question is this: "Can we market technologically deficient products?" If the answer is positive, Intel won't do power-efficient processors, nor good architectural features. Rather, what Intel attempts is to sell products with marketing-friendly catchphrases such as "Gigahertz" (remember that the Pentium 4 only cared about clock speed, although it couldn't do as much real processing as an Athlon at half the speed, while consuming a lot of power!), or "Hyperthreading", which was so poorly implemented that rather than providing a huge speed boost, as it should, it provides a decline, or 65nm products that so far are not even at par with current AMD products, or "dual cores" that are really multi-chip packages. And now Intel spins the weakness of having neither an integrated memory controller nor a processor interconnect as a great advantage that "allows" it to put in gigantic caches and a stupid memory-architecture flexibility, because that's something you can't do much with.

It is the same approach everywhere. For server processors: "Can we market processors for servers even though they can't scale past four cores because they don't have any processor interconnect?", "Can we fool our customers with Netbursts while they want to buy Woodcrests?" Yeah, let's name the Dempseys the Xeon 5000 series and the Woodcrests the Xeon 5100 series. A great question is how long Intel can succeed with lies. It has taken decades to merely crack Intel's monopoly, which was constructed with lies and deception. How long will the monopoly remain, given that Intel insists on blatant lies?

It is no wonder that at Intel they don't spend the extra money doing SOI; they prefer to spend extra on marketing to convince people that SOI is an unnecessary expense.

Wednesday, July 12, 2006

On the playing field of companies:

Imagine that this is the WWF (or the extinct WCW, or the WWE, or whichever you fancy). And we are about to see a match. Two guys get into the ring. One is The Undertaker, and the other is Some blonde from the De-Generation X. The match begins, and there are thousands of things affecting the match and affected by the match.

Of course, there are some things affecting the match that are completely internal to the two wrestlers (size of the heart, size of the lungs, dosage of steroids taken, amount of previous training and workout); some are internal to them but bear interaction with the outside world (a mouth to shout, arms and legs to use). Then there are the real external ones, like fighting style, number of moves known, et cetera. Some are in the environment, like metal chairs, or the poles of the ring (and how masterful each fighter is at using them). And ¡hey! ¡if you thought there were just two contenders, you are wrong! There is Yokozuna, who "happened" to be in the arena, and this guy Hulk Hogan who refuses to let go and passes the chair to one of the guys. Then there are other stakeholders, like girlfriends and friends of the wrestlers... But it does not stop there: there is the regulator (the referee, McMahon), then there are the paying (per view) customers, and the guys who pay for tickets and buy magazines and action figures. And the bookies. And then there are the horoscopes of the guys...... You get the idea....

So ¿What are the things that affect a firm, any firm?

Well, first there are some internal things like the Core Beliefs, Mission, Vision, Strategy and what not. Then there are Recruiting, Compensation plans, Evaluation Policies, Financial Position, Production methods and capacity and what not, which are internal, but bear some relation with the external world.

Then we have the Product's Technical Merits (real and perceived), Product Quality (conformance and customer defined), Marketing, Public Relations, Brand Image, Industry relationships...

But wait, it does not end there, some players may be interested in the outcome, producers of complementary products, for example (chipsets in the µProcessor industry, hot dogs in wrestling).

The market as well is interested. You, as a father, do not want to buy for your children the action figure of the guy that loses all matches (unless the child specifically asks for it); and you do not want to buy a Betamax either.

But wait, it gets better: ¿Are you polluting? ¿Creating jobs? ¿Saving the Whales? ¿Paying your taxes? ¿Abusing your dominant position? ¿Using predatory pricing? ¿Are your products made by 12-year-old children in sweatshops in Asia? ¿Are you showing nipples in prime time in the USA (in Europe it is not such a big deal)? These are called non-market issues.

But it gets even better, if you are in the Electricity or Telecommunications Business, you may have Solar System Issues (solar flares, solar storms and solar spots come to mind), if you are in Satellites, add to that asteroids....

But it gets even better than ever before: You can have Galactic / Universal issues: If you are a chip maker, ¡Cosmic rays can wreak havoc with you! ¿You do not believe me? Well, check the link

(This is the first link I post because this is so obscure that googling for this is hard. I assume that the audience can use google for some of the things / people / theories / facts I mention , if not, this is not the right blog for you)

In the coming weeks, we will analyze one by one all those factors, in our attempt to understand what is happening in the Intel vs. AMD struggle.

¡Salud!

Howling2929

Sunday, July 09, 2006

AMD64 practical: XP 64 + VMWare Server

There are many inconsistencies and issues with 64 bits in the Windows world, drivers in particular, that prevent people from enjoying the whole capacity of their computers. But there is a practical solution.

First, why would anyone bother to install an AMD64 Operating System?: Because unfortunately the only way to have AMD64 functionality requires the Operating System to work in "Long" mode, that is, native 64 bits.

Now, most people would say that it is irrelevant to have AMD64 functionality if you don't have more than 4 GB in your computer, and I will keep repeating that such an opinion is totally wrong, because AMD64 is inherently faster:

  • The same applications should run faster with just a recompilation to 64 bits because AMD64 offers twice the number of General Purpose Registers, 16, instead of 8. That means that the processor can work simultaneously with twice the number of temporary values. Furthermore, in x86, among the 8 GPRs are the stack pointer and the frame pointer ("Base Pointer" in x86 Intel's nomenclature), therefore the applications only really have 6 GPRs. But in AMD64 they have 14, more than twice the effective number (learn more about registers at this footnote).
    This feature has many positive consequences:

    • It eases the memory traffic (which is good because there are latencies associated with accessing even the L1 cache) because the values are already there, in registers that the compiler or programmer can administer at compile time with more intelligence than the silicon dispatcher or the scratchpad register manager can on the fly.

    • It allows more silicon optimizations. Since the values are in registers, they are readily available to all the speculative, reordering, branch prediction, etc., optimizations. If those values were in memory, even in L1 cache, these optimizations couldn't even keep track of them, much less actually factor them out of the critical execution path.

    • It is easier for both the compiler and the programmer to write optimal software. Consider the case of a calculator that can only hold one value: most of your time will be spent writing aside the temporary values that you need to remember, as we already explained in the first item; if you could just leave the temporary values in the calculator and recall them when needed, not only would your life be easier, but it would be less error prone too. Exactly the same happens with compilers and programmers and a large register file.

  • If you have a fairly powerful computer, in the range of 2 GB or more of RAM, chances are that you are already hitting the 4 GB wall, because of virtual memory. The problem is that Windows reserves 2 GB for itself (1 GB if you tweak the system) and therefore the applications fight for the remaining 2 GB (3 GB) at most. When Windows "feels" that it is running out of memory, it gets rid of disk-caching space and swapped-out memory blocks, that is, it increases the number of direct reads (and writes) it makes to the filesystem and to the page file instead of the much faster accesses to data already cached in RAM.

  • What if you need data sets larger than 2 GB (3 GB), or simply want to give Windows the luxury of 2 GB of memory for disk cache? Then either the Operating System or your applications, or both, will have to deal with the non-trivial complexities of "Page Address Extensions". If you were old enough in the early nineties, you will remember the horrible nightmares of configuring DOS's low, high, extended and expanded memory; the 4 GB barrier is the dreaded 640 KB redux. As it is today, there aren't too many incentives to put more than 4 gigs in a compy, even though the addition of memory could really speed things up.

  • Finally, if you do integer calculations larger than 32 bits, either the usage of the floating point unit to do them, or the ripple-chaining of twin 32-bit operations, is from two to eight times slower.

A 32-bit processor is a 32-bit processor, and the patches to support 64-bit features are patches: buggy, error-prone, difficult-to-understand things that are a last resort. Computer technology is something that should be approached looking at the future, not towing the past into the present.

If you are smart you must be wondering: aren't there some tradeoffs, a "price to be paid" to have these benefits? Some people have asked me whether the programs are larger (less memory efficient) if you use 64 bits. The answer is: practically not. AMD64 applications may choose whether or not to use 32-bit integers and 32-bit pointers by default. If they default to 32 bits, there is no space overhead other than the truly negligible overhead of passing some 64-bit parameters on the stack and the return addresses of system calls.

The AMD64 encoding is almost the same as the 32-bit x86 encoding, which is tight. And the "trick" of extending the architecturally visible register file is accomplished by devoting the single-byte encodings of the register INCs and DECs to prefixes: the opcodes from 0x40 to 0x4F become the famous REX prefixes, which work as mode (64-bit/default) and register-class (old/new) selectors for the up to three registers that can participate in an instruction (Register, Base, and Index), with an overhead of one byte, and only on some occasions. It is worth noting that it is still possible to do "INCs" and "DECs" of registers using the MOD R/M version (that is, instead of the single byte 0x40 for "INC EAX", the two-byte encoding 0xFF 0xC0 remains available).

Thus there is practically zero impact in the coding efficiency for AMD64 applications.

The problem with AMD64 is that Intel never wanted it to succeed, for obvious reasons, and thus never really tried to make it work seamlessly in their processors; and Microsoft felt too complacent with XP and 32 bits, so the brunt of making this technology mainstream fell upon puny AMD and unintimidating Free Software projects, with unforgivable marketing incompetence on the part of AMD, who failed to market the advantages I mention in this article and instead allowed the general public's perception to be the misleading "64 bits is only relevant for monsters with more than 4 GB, not my gaming machine".

But the Free Software world wasn't incompetent. Linux and other Free Software projects leveraged their intrinsic advantage of being easy to recompile, recompiled themselves for AMD64, and obtained all the expected benefits. That made Microsoft move its ass to get on board before the ship departed, and with Microsoft providing support, Intel didn't have any other choice but to get on board as well, taking Yamhill (EM64T) out of the closet.

The problem is that neither Intel nor Microsoft really took the x86-64 thing seriously: Microsoft got delayed with XP-64, even though XP for AMD64 was relatively trivial to do, and has been failing miserably at guaranteeing acceptance by not porting to 64 bits all the device drivers that helped Windows XP become so popular.

About Intel, it is commonly accepted that programs run slower under EM64T than at 32 bits, contradicting what I pointed out earlier in the article; but the reason may be very simple: Intel may have implemented EM64T in microcode, so 64-bit instructions may be sort of "emulated", and naturally much slower. Thanks to the 64-bit advantages, a bit of compensation is obtained and the step down in performance is not so pronounced.

Without Microsoft and Intel seriously supporting the 64-bit technologies, the rest of the market has followed suit, and adoption has been lacklustre.

Thus, the AMD processor owner is on his own when it comes to reaping the advantages of 64 bits in a practical way.

The solution to this problem is simple, practical, free of charge, and comes with a number of positive side effects: Microsoft is giving a 120-day free trial of XP 64-bit, so you have a practical way to test the waters before making commitments; then leverage Free and Open Source software. On Windows, important applications such as Java, 7-zip, POV-Ray, Daemon Tools, and even a free (of charge) antivirus, such as Avast! Home Edition, as can be confirmed here, come in 64-bit versions. The rule of thumb is to leverage Free and Open Source Software as much as possible, because FOSS has the advantages mentioned above for easily porting applications to AMD64.

To fill the gap that still remains, you can put a 32 bit computer inside your empowered 64 bit computer installing the free (of charge) VMWare Server and preparing a 32 bit Windows XP Virtual computer. It will only take some shared hard disk space and whenever you need the 32 bit computer, its operation will draw only 320MB of RAM memory (or whatever you choose).

I have a 3800 X2 with 1 GB of DDR2 800 in dual channel, on top of which the virtual computers run, and my virtual XP-32 machine runs faster than my Turion. Furthermore, I haven't experienced any lack of responsiveness in my host XP-64 (the real computer) when my virtual machine is doing heavy stuff such as installing Windows updates, and this was before I changed the configuration, when the virtual machine was configured to have a virtual hard disk mapped to a 6 GB file on my SATA2 (300 MB/s) hard disk formatted in NTFS.

So, if I don't want to deal with the hassle of finding a 64 bit driver, I just fire up my 32-bit XP virtual computer and deal with whatever needs it at 32 bits in the virtual computer.

In case you don't know, the virtual computer runs as an application on your host, without even being aware that it is a virtual computer. The guest (virtual) computer has a virtual network adapter that in reality bridges to the host's (the real computer's) network connection, but from the rest of the network it looks like any other computer. The guest may also use the hardware installed in the host, such as hard disks, optical disks, even USB devices. Since both the host and the guest are the same hardware, you don't need to dedicate too much hard disk space to the guest; just provide it with enough space for the operating system and essential applications. The rest, for instance space to do DVD transcoding, may come from the host through the virtual network, sharing the host's hard disk space.

Using a virtual computer has other advantages. For instance, I wouldn't install the "Gordian Knot Codec Pack" to transcode DVDs to XviD and DivX on any "serious" computer, because it really messes up the operating system configuration with all the codecs and tape-and-bubble-gum applications glued together; but if it is a virtual computer specifically prepared for that, then there is no problem: if it corrupts the whole operating system, well, tough luck, I will spend 5 minutes restoring the last snapshot ;-) Another use is to install testing software, or even software trials. In case you are wondering, the video transcoding operations ran so fast in the virtual computer that I never bothered to benchmark how they would have run on the host. One of these days I will set up an XP 64-bit guest to check whether the applications I use to transcode run flawlessly in XP-64, and perhaps install them one by one on the host.

All these experiments were so successful that I opened my wallet to buy another monitor, so that I could maximize the virtual computer's window on the secondary monitor, and dedicated a (P)ATA-133 hard drive to the guests, to have "portable virtual machines" and to increase my total hard disk bandwidth. Now I get the feeling of having powerful computers side by side: one at 64 bits, the important one, and the other(s) at 32 bits for quick and dirty stuff.

Currently, I fire up a guest with Knoppix every time I need Linux, and I am preparing a Linux-from-scratch to make it exactly the way I would like it, but this project will take a while at an average of 5 minutes of work per day ;-)

Since I am already tuned into the virtualization wave, I am enthusiastic about the virtualization features that AM2 brings and the further developments of Pacifica/Presidio, which are already vastly superior to Intel's Vanderpool because of the integrated memory controller, which allows memory and I/O to be virtualized (see my article on the subject). Once Woodcrest shows up for real and induces a slide in the prices of AMD processors, I will buy a dual-socket, dual-core Opteron compy specifically to run virtualization on it and to centralize all of my hardware.

x86 and AMD64 registers: According to their names and binary code ordering, they are: Accumulator (AX), Counter (CX), Data (DX), Base (BX), Stack Pointer (SP), Base Pointer (BP), Source Index (SI) and Destination Index (DI). Initially these names were significant because every one of these registers had unique roles, but with the cleaning of the Instruction Set in the transition to 32 bits they became almost uniform in properties, although ESP and EBP (the convention to refer to the 32-bit registers rather than their 16-bit parts is the "E" prefix) naturally continue to have very defined and inflexible roles.

Friday, July 07, 2006

Shorts & Bears, isn't AMD warning you?

Take a look at http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_543~110430,00.html: we know that revenues are down 9% sequentially. You are happy so far, right? Your dream come true: AMD warning again after one year of beating and pounding the Wall Street estimates. But keep reading:

Record AMD Opteron™ processor sales were driven by continued strong demand for single-, dual- and multi-socket configurations for servers and workstations. Sales of entry-level and mainstream mobile and desktop processors were down.

That is the company speaking in official capacity. Let's go item per item:

  • Record Opteron sales: The highest profit product line from AMD is having record sales...
  • Sales of medium or high-end mobile or desktop processors?: NO MENTION
So AMD is informing us that the whole earnings cake is smaller, but made with a new recipe that has a much better filling than what we are used to.

Now ask yourselves: how many Sempron sales are needed to make the same profit as one Opteron 186? Could it be that total revenues are down 9% but total profits are up? Did AMD cross the absurdly high (for its standards, anyway) level of 60% Gross Profit Margin? And think about it again: only a 9% decline in revenues with Intel doing a nasty all-out dumping campaign, including 50% off Pentium Ds, their fairly high-end products? The clock is ticking, not to face Conroe, but to face AMD at 65nm... Did you receive your Woodcrest already? When do you expect it? Hello? It is the server space we are talking about...

Thursday, July 06, 2006

On Bounded Rationality and the Nature of Reality:

     In Chicagrafo's Blog, there is a statement of purpose: «My bit of help with the irrationality».

     ¿Can Chicagrafo really help?

     Well, ¡the answer is no! ... And neither can I.

     You see, this gentleman called Herbert Simon put forward a model of human behavior called "Bounded Rationality". In very crude terms it means that humans are rational only in a limited subset of their interactions with their environments, usually the one which is closely aligned with their experience, training, and formation, and as they depart from that comfort zone, they behave more emotionally / irrationally.

     Of course, some of you may say that, since no two humans share exactly the same background, if we assemble a collection of humans whose backgrounds cover most of the relevant field, we may get a rational explanation of the whole field. But this method has two flaws:

     * ¿Are you sure that these humans will be able to communicate in a rational fashion, when they have such different backgrounds? Maybe, since their backgrounds are different (as per the assumptions) there will be miscommunication and frustration thwarting the effort (try to put a psychologist to talk with an engineer, throw in two marketing majors and a lawyer to boot, and see what happens). This is one of the banes of Inter-Disciplinary teams.

     * The second problem is with reality itself. You see, I think that reality is a REALLY BIG picture, but not a painting, which is continuous in nature, nor a jigsaw puzzle, which is not continuous but at least has parts that fit each other perfectly, but a huge mosaic, like the ones you find in some churches. Some people are very close to it (we call those specialists) and can only see some individual bits and pieces; some are very far away from it (we call these generalists) and can see the big picture but have no clue on how the pieces interact, but at least they can shout to the specialists what to do (whether that is effective or not is a different story); some people are very far away from it, but with a telescope that they can not rotate (McKinsey & Company call them "T-Man"), therefore they can see the big picture, but are experts in one (or a few) area(s); and finally some of them are blessed by being far away, having a telescope, and being able to rotate it 360° (those may be called geniuses). Therefore, we are trying to interpret a thing where the pieces do not fit together nicely, and sometimes our point of view does not help...

     ¿Is Chicagrafo a genius? No. He is a very intelligent man (whom I respect, but sometimes do not agree with), but no genius. I will let him choose his own category.

     ¿Am I a genius? No, I am not ... I am a guy far away from the mosaic, with a telescope that almost does not move, and I am refocusing and pushing the sucker (for instance reading a lot, doing marketing courses while working as a SysAdmin in a telecomms company, doing an MBA) to amplify the telescope's range of movement, maybe 1° today, maybe 3° tomorrow, and I will keep refocusing and pushing the sucker until I die or get Alzheimer's......

     The point is that, at the end of the day, we will not be able to help with the irrationality ... But, if you keep an open mind and "Listen Without Prejudice" you may get some of your bases covered, and some interesting info out of the reading.

¡Salud!

Howling2929