My 1st Million At 33 – yes, you can do it too

A site to share my tips, tools, and humble thoughts on the journey to wealth

  Archive for the 'Futuristics' Category

    MtGox offline due to the biggest theft in Bitcoin

    Posted by Frugal on 25th February 2014

    First you can’t withdraw your cash, then you can’t trade, and then the site goes offline completely. It’s sad and ridiculous that a company of this size could have had theft going on for years (90+% of assets stolen) without noticing it. I think the MtGox brand is permanently damaged.

    Bitcoin will unfortunately be set back along with its biggest exchange, MtGox. What’s most amazing about Bitcoin is not how it went from cents to over a thousand dollars (and now sits at about $500), but rather how much computational power goes into all of the SHA-256 hashing in the network. I’ve been wanting to buy ASIC hardware for mining, but the hardware specs can never catch up with the increases in difficulty. The difficulty and the computational hardware behind it are both increasing at an exponential pace; the network hash rate is now at about 27,000 tera-hashes per second. I don’t know how long it can keep growing exponentially; sooner or later it will hit the limit of global semiconductor fab capacity, and after that it can only increase linearly.
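
    For readers curious what all that hashing actually does, here is a minimal sketch (my own illustration in Python; the header bytes and target are made up, and real Bitcoin block headers are 80-byte binary structures) of the proof-of-work idea: keep double-SHA-256 hashing a block header with different nonces until the result falls below the difficulty target.

    ```python
    import hashlib

    def double_sha256(data: bytes) -> bytes:
        """Bitcoin hashes everything with SHA-256 applied twice."""
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def mine(header_prefix: bytes, target: int, max_nonce: int = 10_000_000):
        """Try nonces until the double-SHA-256 of (header + nonce) is below
        the difficulty target. Returns (nonce, hash hex) or None."""
        for nonce in range(max_nonce):
            digest = double_sha256(header_prefix + nonce.to_bytes(4, "little"))
            if int.from_bytes(digest, "big") < target:
                return nonce, digest.hex()
        return None

    # Toy example with a very easy target so it finishes instantly; the real
    # network difficulty in early 2014 required quadrillions of hashes.
    print(mine(b"toy-block-header", target=2 ** 240))
    ```

    The exponential rise in network hash rate is just more and more machines racing through this same loop against ever lower targets.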

    Since there are not many ASIC players in the space, I fear that when a few players control such a big percentage of the computational power, down the road there may be another MtGox.

    If you want to read further about the MtGox situation, here is the unofficial crisis strategy draft. The part 1 strategy, using arbitrage, amounts to insider trading: if MtGox knows for sure that it is not going bankrupt because of a buyout deal, and is buying up low-priced Bitcoin from its own customers to erase its debt, there won’t be any reputation or trust left at MtGox.

    Is Bitcoin dead? Probably not. Is MtGox dead? Probably. Should you make any online payment using Bitcoin? NEVER. Bitcoin payment is the antithesis of using credit cards: once it’s paid, it’s gone, with no chargeback possible. The only thing standing between you and the non-delivery of goods or services is fragile business trust, and that trust is only as good as the difference between how much the business can earn from you in the long term and how much it can take from you right now.

    Posted in Business, Futuristics | Comments Off

    Join me for a new invention?

    Posted by Frugal on 18th December 2007

    Are there any good mechanical engineers out there who would like to join me in a new invention?

    Video-taping my children has become a pain in the arm. I have thought about this a couple of times: if a person could mount the camcorder on the shoulder (probably not on the head, due to the weird feeling), one could easily record videos with very little physical effort. Of course, the biggest problem is that when you move and walk, the video quality will be terrible. But I think I may know how to solve that.

    The answer is similar to those noise-cancelling earphones, which cancel unwanted noise by generating an opposing signal. I learned that trick in my senior year in college; it’s very old technology, definitely decades old.
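
    For the signal-processing half (the mechanical mount is where I need help), here is a minimal sketch in Python/numpy of the same cancellation idea applied digitally, with all of the names being my own: estimate how much each frame has shifted relative to the previous one, then apply the opposite shift, just as anti-noise earphones inject an opposing signal.

    ```python
    import numpy as np

    def estimate_correction(prev_frame: np.ndarray, curr_frame: np.ndarray):
        """Phase correlation: find the (dy, dx) roll that maps the current
        grayscale frame back onto the previous one."""
        F = np.fft.fft2(prev_frame)
        G = np.fft.fft2(curr_frame)
        cross_power = F * np.conj(G)
        cross_power /= np.abs(cross_power) + 1e-12
        corr = np.fft.ifft2(cross_power).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Indices past the midpoint really mean small negative shifts.
        if dy > prev_frame.shape[0] // 2:
            dy -= prev_frame.shape[0]
        if dx > prev_frame.shape[1] // 2:
            dx -= prev_frame.shape[1]
        return dy, dx

    def stabilize(prev_frame, curr_frame):
        """Apply the correcting shift -- the 'anti-motion' signal."""
        dy, dx = estimate_correction(prev_frame, curr_frame)
        return np.roll(curr_frame, shift=(dy, dx), axis=(0, 1))
    ```

    A real stabilizer would also smooth the corrections over time and crop the rolled-in edges, but the cancellation principle is the same.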

    Anyone out there care to brainstorm a little?

    Posted in Announcement, Futuristics | 4 Comments »

    Future Trend: Intelligent Software and Artificial Intelligence

    Posted by Frugal on 30th November 2006

    This article is less money-related and more technology-related (or job-related, for that matter). It concerns all of us. Will intelligent computers overtake the world in our lifetime, or our children’s lifetime? Many futurists will tell you YES, with great excitement and/or extremely deep concerns about the social ramifications. I, on the other hand, will tell you that they are way too early in their “visions” (in my personal opinion). I do believe that software will get “smarter” over time, but reaching a state comparable to human intelligence is probably at least one hundred years away, if not a couple of hundred. Please bear with me through my lengthy discourse; I hope it will be interesting to you.

    Do you understand how computers work, and how software is written? Lack of knowledge breeds fear and ignorance. I want to ask: how many of those futurists know anything in detail about artificial intelligence, or AI? AI was one of my academic pursuits and a spare-time contemplation. When I was 13, I wrote an unbeatable tic-tac-toe game on an Apple II computer. When I was 15, I laughed at the most rudimentary form of AI, robotic mice going through a labyrinth, and solved the harder problem of creating a random labyrinth with no unreachable space inside. By that time I had realized the biggest hurdle of AI (or rather of AI based on expert systems): all software, intelligent or not, must be created by a human programmer. Software is inherently a rule-based recipe. Whether or not the software can beat the best chess player in the world, it is still rule-based (or algorithm-based, if you are more mathematically inclined), and it must be created by a human being. If the software in the computer shows any apparent intelligence, that intelligence is still granted by its creator. Yes, computers can analyze tons of data much faster than humans can, and/or calculate millions of forward scenarios unfathomable to humans (for lack of human processing time), but it is humans who give computers the rules to do so. And if computers can create and deduce their own rules, those rules are still generated within the same rule-creating framework, which is itself another rule-based system. Sorry to inform you of this, but computers just canNOT think outside of the box, figuratively speaking.
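
    To illustrate how “rule-based” such programs are, here is a sketch of one standard recipe (my own toy code, not the original Apple II program) for generating a random labyrinth with no unreachable space: carve passages with a depth-first search, which by construction visits every cell, so every cell ends up connected to every other.

    ```python
    import random

    def generate_maze(width, height, seed=None):
        """Recursive-backtracker (depth-first search) maze carving.
        The search visits every cell, so no cell can be walled off."""
        rng = random.Random(seed)
        visited = {(0, 0)}
        removed_walls = set()      # pairs of cells with the wall knocked down
        stack = [(0, 0)]

        def neighbors(cell):
            x, y = cell
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < width and 0 <= ny < height:
                    yield (nx, ny)

        while stack:
            cell = stack[-1]
            unvisited = [n for n in neighbors(cell) if n not in visited]
            if unvisited:
                nxt = rng.choice(unvisited)
                removed_walls.add(frozenset((cell, nxt)))
                visited.add(nxt)
                stack.append(nxt)
            else:
                stack.pop()        # dead end: backtrack
        return removed_walls

    maze = generate_maze(8, 8, seed=42)
    print(len(maze))  # a spanning tree over 8x8 cells always has 63 passages
    ```

    Every bit of the “intelligence” here lives in the rules the programmer chose; the computer only follows the recipe.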

    Later on, in college, I came across the technologies behind speech recognition and optical character recognition. I won’t go into the details of speech recognition, which is based on Hidden Markov Models (HMMs) that model human speech probabilistically. As for optical character recognition (OCR), it was really the advancement in the processing speed of computers that made a 1974-era technology called the neural network possible again.

    When I first learned about neural networks on my own as a senior in college, I was so excited, because I knew I had found the thing I had always been looking for: the true way that human intelligence is assimilated and processed. A neural network simply models how the roughly 100 billion neurons inside the human brain work: it is a network of nodes with a different connection strength to each node, and those connection strengths are repeatedly trained by stimuli to become either stronger or weaker. Human brains compute through biochemical reactions between neurons in an analog (non-digital) way. However, such computations are done in a massively parallel fashion involving billions of neurons every day. Through stimuli, we form patterns; through patterns, we form rules; and so on.

    Now, if we model the same computation on computers, do you know how much worse it gets? First of all, computers deal with digital numbers, so each analog connection strength now needs to be a digital (floating-point) number. Depending on how many neural cells we model, say N, the number of connections between all cells is on the order of N times N. And each computer, or rather each CPU, can only process things serially, even though each CPU is really, really fast. Because computers nowadays are quite fast, we can handle limited applications such as optical character recognition (OCR) in a reasonable amount of time (mostly via backpropagation neural networks). But to model 100 billion neurons in a human brain, all computing in parallel? Given current computing power, you would need LOTS and LOTS of computers, and electric power too.
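
    To make the “connection strengths trained by stimuli” idea concrete, here is a toy backpropagation network in Python/numpy (my own sketch, nowhere near a real OCR system): a handful of artificial neurons whose weights are nudged over and over until the network learns a simple mapping (XOR here).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy training set: the XOR function (2 inputs -> 1 output).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # "Connection strengths": 2 inputs -> 4 hidden neurons -> 1 output.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    lr = 0.5

    for step in range(20000):
        # Forward pass: stimuli flow through the weighted connections.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass (backpropagation): nudge every connection in
        # proportion to its contribution to the squared error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2))  # should move toward [[0], [1], [1], [0]]
    ```

    Scaling this same recipe from a few dozen weights up to anything like the brain’s 100 billion neurons is exactly the computational wall described above.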

    What are the current most promising technologies that may duplicate the computational power done by human brains?

    1. Ex-Caltech professor Carver Mead has done analog implementations of neural networks in silicon chips, but the effort does not appear to have been very successful, since I can’t find much more information on that thread.
    2. Quantum computation: if successful, this way of computing would be power-efficient and would enable much faster computers than current silicon-based semiconductor technologies can provide.
    3. There is an actual system, built with probably hundreds of thousands of computers, that mimics a human brain. I can’t find the link now, but its computational power is on the same order as that of a single human brain.

    In any case, despite Moore’s law of computation, I don’t think we are anywhere close to a machine-intelligence era at all. After studying both the biology of the brain and the computational aspects of the brain, I truly marvelled at how great our brains are at doing the “wet” (biological) computations.

    Just a side note on how to become a smarter person. Do you know how to become smarter? Smartness is always associated with the ability to change, so make sure you’re willing to change when things don’t go your way. When your biological synapses are no longer plastic (no longer able to change as you get older), your learning ability starts to drop. Do you know why there is only a fine line between genius and insanity, as Oscar Levant said? Obviously a genius has lots of brain activity, if not great learning ability. For any control system, more elastic, wider-ranging parameters probably mean faster learning and convergence, but they also come at the cost of less stability. Obviously, when stability is lost, we would call it insanity. Got that? By the way, the above is only my own thinking; I didn’t read it anywhere, but I’m sure somebody must have written or said something similar.

    By the way, even if one day we do have such computational power, that alone will not create human-like intelligence. The human brain learns because there is a need for survival; the learning process is goal- and survival-driven. Without a motive, a machine will NOT continually learn and improve. To have a motive, there must be inputs and outputs: the outputs go out to the external world in an attempt to get the most desirable inputs back into the human (or machine) brain. And of course, as the creator of the intelligent machine, one must carefully define what the most desirable states are for the machine, though a thinking machine will certainly figure out that its own survival should rank very high.
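
    A toy way to see the “motive” point: below is a minimal sketch (the actions and payoff probabilities are made up) of a goal-driven loop in Python, where the creator defines which inputs count as desirable and the machine shifts its outputs toward whatever brings those inputs back.

    ```python
    import random

    # A toy "world": the creator defines which inputs (rewards) come back
    # for each possible output the machine can send out.
    ACTIONS = ["output_a", "output_b", "output_c"]
    REWARD_PROB = {"output_a": 0.2, "output_b": 0.7, "output_c": 0.4}

    def run_agent(steps=5000, epsilon=0.1, seed=1):
        """Goal-driven loop: act, observe the input that comes back,
        and shift future behaviour toward the most desirable inputs."""
        rng = random.Random(seed)
        value = {a: 0.0 for a in ACTIONS}   # learned desirability of each output
        count = {a: 0 for a in ACTIONS}
        for _ in range(steps):
            # Mostly exploit what looks best so far, occasionally explore.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=value.get)
            reward = 1.0 if rng.random() < REWARD_PROB[action] else 0.0
            count[action] += 1
            value[action] += (reward - value[action]) / count[action]  # running mean
        return value

    print(run_agent())  # the agent should end up rating "output_b" highest
    ```

    Take the reward definition away and the loop has no reason to prefer any output over another, which is the point above.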

    I will stop here, since I’m really going off on a tangent.

    Posted in Futuristics | 5 Comments »

    Future Trend: What is Moore’s Law Doing for Consumers

    Posted by Frugal on 18th October 2006

    I’m starting a Future Trend series to express my views on future technological trends and how they may relate to your investments. This is the very first one. I hope you will enjoy it. Please let me know what you think via the comments.

    With Moore’s law of doubling transistor counts every 18 to 24 months, integrated circuits are reaching an unprecedented level of integration and an increase in both processing and communication power that was not possible before (or even just 24 months ago). While in the past Moore’s law primarily benefited consumers through faster processors, the current trend is toward multi-processor architectures, for two primary reasons:

    1. Power density, as processor clock frequencies increase, is posing a serious technological challenge.
    2. Everyday software applications benefit less from higher clock speeds than from multiple processors (especially dual processors).
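
    To put the doubling rule above into numbers, here is a quick back-of-the-envelope sketch in Python (the starting transistor count is made up for illustration):

    ```python
    def projected_transistors(start_count, years, months_per_doubling=21):
        """Moore's law as compound doubling: the count doubles every
        `months_per_doubling` months (18-24 months; 21 used here)."""
        doublings = years * 12 / months_per_doubling
        return start_count * 2 ** doublings

    # Hypothetical chip with 100 million transistors today:
    for yrs in (2, 5, 10):
        print(f"{yrs:>2} years: {projected_transistors(100e6, yrs):,.0f} transistors")
    ```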

    With Intel losing market share to AMD over the last two years, partly because Intel was late with its 64-bit desktop/server processors, the CPU market is undergoing a commoditization process in which price, not brand, is what matters more. Wary of the AMD competition, Intel has divested its communication interests and refocused its effort on processors. Intel has beaten AMD this time around in introducing quad-core processors, arriving shortly in 2006.

    However, doubling or quadrupling the same CPU may not do much for consumers. With so many transistors accumulating exponentially on a single chip, the amount of IP (intellectual property) that engineers can design can barely keep up. Instead of doubling or quadrupling the same thing over and over again, the more important industry trend is toward integrating different functions, with the goal of reducing the overall bill-of-material (BOM) cost. After integration, a single chip performs the same functions that previously required two or more chips, using less PCB (printed circuit board) area and at a lower overall cost. In the endless drive toward lower cost, the company that can deliver the highest degree of integration will certainly win in the end. IC design companies of this caliber are few. I would only mention BRCM and MRVL, which are eating (or have eaten) away market share that once belonged to INTC, STM, and CNXT (for disclosure purposes, I own/owned at least one of the previous five securities at different times). Both companies, however, are under stock-option back-dating investigations and are delinquent in their SEC filings.

    The integration of different chips is allowing box/device or hardware makers to provide many more functions at a lower cost and in a shrinking footprint. The best example is today’s cell phones, many of which can serve as MP3 players and digital cameras, or be used for browsing the web and checking email on PDA-style phones. The low-cost examples are DVD players and DVD recorders for PVR use. I’ve seen rock-bottom prices under $30 for DVD players, and I bought my current Lite-On DVD recorder at Costco for only $90. It replaced my VCR, and I use my DVD recorder for archiving my mini-DV tapes.

    As IC companies cannibalize each other’s market share and overall revenue, the survivors may be very few. Despite all this, IC revenue still increases each year, thanks to newer applications and higher market penetration. Gradually, however, the picture for the losers won’t be pretty. If you do own individual companies in this space, I advise you to pick your stocks carefully. Make sure the company you pick is on the right trend of more integration of mixed-signal (analog & digital) chips. If you are not so sure, you should own a sector ETF such as SMH instead. If you are confident about which companies are good and which are bad, you could consider hedging your bet with a partial short position in the weaker companies.

    More comments on INTC vs. AMD: if you look at what Intel (INTC) has done, divesting its communication business while AMD acquired ATYT for its graphics business, you should realize that INTC is really on a dangerous track, especially knowing that integration is the name of the game in ICs. Given that Intel’s graphics chips have always been at the low end, AMD’s acquisition of ATYT may become the next big strategic threat to INTC. Another good reason to buy SMH instead of individual stocks.

    As they say, QQQQ leads the general market, while SMH leads QQQQ. It is important to watch SMH simply because IC/semiconductor companies are usually the leaders in technology.

    Posted in Futuristics | 6 Comments »