Just One More Layer of Indirection
(Trying to achieve stable orbit with sufficient architecture)

2 x 42″ (permalink)

January 27th, 2012

96px != 1inch (permalink)

August 22nd, 2010

    I find it ridiculous that Firefox will be redefining CSS units. The plan is to redefine the physical CSS dimensions so that 1in = 96px, and to add a new physical unit called “mozmm” for the physical millimeter. Let me make this clear: currently CSS defines “1mm” to mean one physical millimeter; Firefox wants to change “1mm” to mean 3.779px, and define “1mozmm” to mean one physical millimeter.

    I hope it is not just me who asks “Why!?!?!?”.

    This is how I see it:

    • If web developers are using physical units wrong, then let their pages render wrong
    • Even so, if you want to render badly designed pages, then just lie about the pixels/inch.
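    For what it is worth, the 3.779 figure is just the arithmetic fallout of the 96px/in anchor: CSS pins 1in to 2.54cm, so pinning 1in to 96px pins 1mm to 96 / 25.4 pixels. A trivial sketch (the class and method names are mine):

```java
public class CssUnits {
    // CSS anchors: 1in = 2.54cm = 25.4mm. If 1in is also pinned to 96px,
    // then one CSS "mm" becomes 96 / 25.4 = ~3.78px.
    public static double pxPerMm(double pxPerInch) {
        return pxPerInch / 25.4;
    }

    public static void main(String[] args) {
        System.out.printf("1mm = %.3fpx%n", pxPerMm(96.0));
    }
}
```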
    Posted in Economy, Personal, Rants, Technology | No Comments »

Gamma Correction (permalink)

February 25th, 2010

    I have long desired an awesome colour wheel from which to select the colours for my palette.  I am not looking for a beautiful human-selected palette; I am looking for a purely mathematically based palette that also looks great.  I have noticed that the yellows and browns found in software palettes occupy only a tiny slice of any mechanically generated palette.

    Let’s begin with my problem.  I am trying to transition from green to yellow by adding red:

    The problem

    I cannot be certain what you see, but I see green, green, green, green, maybe a lighter green, and then a transition to yellow.  This is a poor start to making a good colour wheel.  It seemed to me that the high-intensity green was overwhelming the small amounts of red.  I wanted some way to boost the low reds so I would get a better range of lime greens.  I knew intensity was a “logarithmic” scale, and I was hoping that compensating for the “logarithmic error” would help with this problem.

    I was wandering the web when I found that Eric Brasseur needed to compensate for gamma error in picture scaling.  Could his solution be what I was looking for?

    This time I used dithering to mix my colours and make the transition from green to yellow.  If I can see past the graininess of the colour swatches, I can see whether this transition is better.
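    The dithering idea can be sketched as follows (my own minimal illustration, not the code that produced the swatches): for a blend fraction t, each pixel independently takes one endpoint colour or the other, and the eye averages the emitted light itself, which is a physically linear average rather than an average of sRGB code values.

```java
import java.util.Random;

public class DitherBlend {
    // Each output pixel takes colour b with probability t, otherwise colour a.
    // Viewed from a distance, the eye averages the emitted light, so the
    // perceived mix is a linear-light blend of the two colours.
    public static int[] dither(int a, int b, double t, int pixels, long seed) {
        Random rnd = new Random(seed);
        int[] out = new int[pixels];
        for (int i = 0; i < pixels; i++) {
            out[i] = (rnd.nextDouble() < t) ? b : a;
        }
        return out;
    }

    public static void main(String[] args) {
        int green  = 0x00FF00;
        int yellow = 0xFFFF00;
        // A 25% yellow / 75% green swatch, 16 pixels wide.
        for (int px : dither(green, yellow, 0.25, 16, 42L)) {
            System.out.printf("%06X ", px);
        }
        System.out.println();
    }
}
```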

    The test

    I definitely get more yellows, and the uniform green has been removed.

    Using Eric Brasseur’s gamma error correction to blend colours, I get:

    The solution


    I should point out that this is not actually Eric’s gamma correction; he was just the one who brought it to my attention. Apparently, the gamma curve for consumer monitors is well defined: see the Wikipedia article on the sRGB standard.
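    A gamma-corrected blend can be sketched like so (my own sketch built on the standard sRGB transfer functions, not code from Eric’s article): decode each channel to linear light, mix there, then re-encode.

```java
public class GammaBlend {
    // sRGB decode: code value in 0..1 -> linear light (per the sRGB standard).
    static double toLinear(double c) {
        return (c <= 0.04045) ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    // sRGB encode: linear light -> code value in 0..1.
    static double toSrgb(double l) {
        return (l <= 0.0031308) ? 12.92 * l : 1.055 * Math.pow(l, 1.0 / 2.4) - 0.055;
    }

    // Blend two 8-bit channel values with weight t, mixing in linear light.
    public static int blendChannel(int a, int b, double t) {
        double mixed = toLinear(a / 255.0) * (1.0 - t) + toLinear(b / 255.0) * t;
        return (int) Math.round(255.0 * toSrgb(mixed));
    }

    public static void main(String[] args) {
        // Half-way between green (R=0) and yellow (R=255): the naive linear
        // blend says R=128, but mixing in linear light gives a brighter value.
        System.out.println(blendChannel(0, 255, 0.5));
    }
}
```

    The half-way red channel comes out near 188 rather than 128, which is why the corrected strip reaches lime green so much sooner than the naive one.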

    Now the three all together:

    The solution - together

    From top to bottom we have the gamma corrected blend, the dither blend, and the original “linear” blend.

    The following has blends of equal intensity.  Bright yellow, as shown in the above palettes, has double the brightness of the green, so it is not a very good test for gamma error correction and does not belong here.

    Comparison RGB

    Click on image to view at full size. The reduced size shown blends the dithering improperly.

    Again, from top to bottom, we have the gamma corrected blend, the dither blend, and the original “linear” blend.

    If you do not see the top row and middle row as the same, you should calibrate the gamma of your monitor.  I find these two images much better for calibrating gamma than the black-and-white lines (or RGB lines) used in most other gamma calibration images.

    Finally, a mathematically based colour wheel:

    Colour Wheel

    Posted in Math, Physics | No Comments »

3D TV is NOT Bad For You (permalink)

February 11th, 2010

    Some guy named Mark Pesce wants to prevent me from seeing Avatar in my home, in 3D. He claims heavy use of 3D monitors will result in “binocular dysphoria”, and that television manufacturers should perform health and safety testing.

    Well, I say “Bah! Give me Avatar, and forget the health testing!”.

    Citing the Wikipedia article on depth perception, he claims we humans use nine different cues to determine depth. Upon reading the article, I found only two that are relevant to 3D screens and monitors. They are called “Accommodation” and “Motion Parallax”.

    Accommodation is where you can feel the depth at which you focus, and use that to determine depth. I believe humans are not really sensitive to this feedback, and are only able to use it when items are really close to the eye. For example, have you ever looked at a repeating pattern, such as a screen, wallpaper, or fence rails, and seen it closer, or farther, than it really was? Accommodation did not help you.

    What did help you determine proper depth was motion parallax. This is mentioned in the Wikipedia article, but not emphasized as much as I would like. I believe we humans are very sensitive to motion parallax. Combined with the fact that we are fidgety creatures, we can see depth with only one lens, because that lens is always getting a new perspective. When we watch 3D screens we do not get motion parallax feedback, and the association between the inner ear and motion parallax (or the lack of it) must be ignored by the brain.

    But I believe the human mind is much more powerful than Mr Pesce gives it credit for. I may concede that older people may not be able to transition between real-life 3D and projected 3D, but I have no doubt younger minds will be able to switch, even if the switch has to be conscious.

    In summary: I want to watch Avatar in 3D.

    Posted in Personal, Physics, Technology, Updates | No Comments »

Missing Step Zero (permalink)

December 9th, 2009

    Why do instructions always seem to miss the step before the first? I will call this missing step “Missing Step Zero”, if you will.

    Maybe I am just bad at Googling. Maybe I have a problem with directions. Or, maybe I have a non-standard setup, but the missing step zero has plagued me several times in my life.

    Now, I am too forgetful to remember all the Missing Step Zeros I have had to hunt down, but here are two:

    “How do I write a file in Oracle’s PL/SQL?”

    But none of them work!

    After hours of investigation I found the missing step zero:

    Make sure sysdba grants you permission to execute the UTL_FILE package


    I wish I read *this* first

    “How do I turn on WCF logging?”

    And there are hosts of other sites with various logging options. But none of them work!

    After a few days, and help from a friend, I found the sneaky step zero:

    Only the app.config file in the executable subproject is used to configure logging. All other subproject config files are ignored.

    Seriously? How about generating an error when an app.config file is not going to be used?

    Posted in Coding | No Comments »

Are Ad Servers Bogging Down the Web? (permalink)

November 30th, 2009

    Slashdot brings up a point I often complain about: ad servers are slowing down the web.

    I do not use web applications because they are slow. I do not know what people do to pass the time while they wait for each page to load. Using web mail and adding an attachment makes you feel like you are wasting precious time.

    The web is mostly slow because of server latency, especially “waiting for …” whatever ad server has bogged down. I particularly dislike the sites that also use the slow Google Analytics servers.

    Posted in Economy, Languages, Rants, Technology | No Comments »

Type Transformation Library. In Java! (permalink)

November 28th, 2009

    I have just read an interesting post on LtU which asks for a type-class transformation library. It reminded me of wanting the same thing, though I had not considered making these features into a stand-alone project. This is perfect for a project:

    1. The features are definitely useful: I have had to build portions of this library for myself, and I would have been happy to have a library that did it for me.
    2. The project has a finite size: as long as we limit the number of forms we can transform between, the number of transformations is finite. Choosing the top four or five common forms would certainly make a useful library.
    3. Adding forms is perfect for open source contribution: the overall structure of the API would be clearly defined, and people could add their own transformations without knowing the details of the bigger project. This limits the scope of each task and keeps it manageable.
    4. Much of the heavy lifting has been done: these transformations already exist in various personal code libraries and in the open source community. All that remains is patching the disparate parts into a normalized, clean API.
    5. The type transform API should be normalized and complete (any type to any other type) so it is easy to learn. This may demand that we implement non-useful transformations or, worse, annotate forms that cannot support the richness that some forms can.
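    The skeleton of such an API might look like this (entirely my own sketch; all names are hypothetical): a registry keyed by source and target form, with one uniform entry point for callers and one registration point for contributors.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A toy registry of transformations between "forms", keyed by the source
// and target classes. Contributors register transformations; callers use
// one generic entry point.
public class Transforms {
    private final Map<String, Function<?, ?>> registry = new HashMap<>();

    private static String key(Class<?> from, Class<?> to) {
        return from.getName() + "->" + to.getName();
    }

    public <A, B> void register(Class<A> from, Class<B> to, Function<A, B> f) {
        registry.put(key(from, to), f);
    }

    @SuppressWarnings("unchecked")
    public <A, B> B transform(A value, Class<A> from, Class<B> to) {
        Function<A, B> f = (Function<A, B>) registry.get(key(from, to));
        if (f == null) {
            throw new IllegalArgumentException("no transform " + key(from, to));
        }
        return f.apply(value);
    }

    public static void main(String[] args) {
        Transforms t = new Transforms();
        // Two hypothetical "forms" of a record: a CSV line and a field array.
        t.register(String.class, String[].class, s -> s.split(","));
        String[] row = t.transform("a,b,c", String.class, String[].class);
        System.out.println(row.length);
    }
}
```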
    Posted in Coding, Java | No Comments »

Try an Index instead of Changing Your Infrastructure (permalink)

November 15th, 2009

    One thing that disturbs me is the proliferation of evil agents who love key-value stores, especially those who love key-value stores in a latency-infested cloud. What upsets me more are the infinitely confused people who believe a database is *worse* than their key-value storage.

    Here is one where Ian prefers Cassandra over proper database indexes:

    For some reason, Ian has compared his terrible query to his optimized Cassandra implementation. The query (and schema) are so bad, I suspect it’s a straw man.

    Ian does not provide the SQL that leads him to conclude that “Computing the intersection with a JOIN is much too slow in MySQL, so we have to do it in PHP.” Any statement implying that a join is done faster outside the database should set off warning bells: the database has all the information required to make your queries fast. If this is not the case, then something is seriously wrong with your indexes.

    An all-database solution, even if it is a stored procedure, will be faster than a networked solution just because of latency. Personally, I have found returning a few hundred extra rows from a single “close enough” query significantly faster than issuing two queries with perfect results: Latency is your biggest enemy.

    Let’s look at the Digg schema provided:

    CREATE TABLE `Diggs` (
      `id`      INT(11),
      `itemid`  INT(11),
      `userid`  INT(11),
      `digdate` DATETIME,
      PRIMARY KEY (`id`),
      KEY `user`  (`userid`),
      KEY `item`  (`itemid`)
    );

    CREATE TABLE `Friends` (
      `id`           INT(10) AUTO_INCREMENT,
      `userid`       INT(10),
      `username`     VARCHAR(15),
      `friendid`     INT(10),
      `friendname`   VARCHAR(15),
      `mutual`       TINYINT(1),
      `date_created` DATETIME,
      PRIMARY KEY                (`id`),
      UNIQUE KEY `Friend_unique` (`userid`,`friendid`),
      KEY        `Friend_friend` (`friendid`)
    );

    Some changes to the indexes would help:

    1. KEY ‘user’ (‘userid’) does not help much when a user has dugg many items: the index points to all the matching ‘Diggs’ records, but the database still has to load every one of those blocks from disk (very likely one block per record). I would suggest UNIQUE KEY ‘user’ (‘userid’, ‘itemid’, ‘digdate’) instead; this would allow the query to be answered from the index alone, without going back to the massive, unsorted ‘Diggs’ table.
    2. UNIQUE KEY ‘Friend_unique’ (‘userid’, ‘friendid’) seems to be the correct index for “query Friends for all my friends”; this should be a single-block lookup. There is no reason it should take 1.5 seconds.
    3. KEY ‘Friend_friend’ (‘friendid’) – maybe Ian instead intended a list of all users that made ‘me’ a friend, rather than all users ‘I’ have befriended. That would certainly explain the 1.5-second response time. In this case, the index should be expanded so the table blocks do not need to be loaded: UNIQUE KEY ‘Friend_friend’ (‘friendid’, ‘userid’).
    4. Maybe MySQL is a poor database and loads the original records during a query even when the columns are not needed.

    Anyone complaining about the extra disk space required by these indexes should note that Ian’s Cassandra implementation consumes much more space than what I am advocating here.

    Even *IF* the Digg database is so big that the index lookups take too long, we should realize that we can pre-compute query results in the database, just like Ian’s Cassandra implementation does. If the database does not have materialized views, we can always add triggers to do the job ourselves. The former is still a limited technology, and the latter is quite messy, but both are better than changing your whole platform.

    Finally, it seems Ian is optimizing for the worst case: “Kevin Rose, for example, has 40,000 followers”. I disagree with changing your infrastructure for a single use case affecting a minority of users, but that is a business decision involving more issues than Ian’s blog entry can be expected to consider.

    In conclusion, I am angry that the human race has lost another soul to the legion of key-value fanatics. I am further incensed that apparently 298 other nameless souls have followed Ian into the pits of hell. (298 diggs at time of writing).

    Posted in Coding, Rants | No Comments »

RMS vs Miguel (permalink)

November 13th, 2009


    Last month RMS and Miguel had a disagreement. Only now have I had the time to write out my thoughts.

    Miguel is Overly Optimistic

    First, I can agree with Miguel when he says

    “I know that there are great people working for the company,…”

    but I take issue with the second half of his statement,

    “…and I know many people inside Microsoft that are steering the company towards being a community citizen.”.

    I have no doubt that Microsoft’s employees are trying to steer the company towards being a community citizen. But Miguel has an implicit trust that shareholders will not take back that steering wheel and drive in the opposite direction. That is where I oppose Miguel’s optimism.

    Microsoft shareholders have been sitting on a goldmine for the last 20 years. Sure, Microsoft has been making reasonable products over the years, but its profitability is primarily due to the great waves of money in the world economy, generated by ever-increasing public and private debt. The population spent money it did not have on any nifty software feature. Microsoft developers benefit in this environment of free money because the shareholders find it easy to be altruistic when profits are high. I even contend that free software has had a hard time competing because money itself is (apparently) free.

    The good times for Microsoft will not last. I believe the next decade will show how cruel the shareholder can be to “open source”. There are two main forces at work that will make Microsoft shareholders act much more ruthlessly, and probably more shortsightedly.

    1. A poorer user base: money will be tight, either because of domestic inflation or a lack of liquidity. Microsoft faces an unrelenting barrage of competition from Free and Open Source software, and people and corporations will have a greater incentive to use free software to reduce their spending.

    2. Profitable innovation is reaching its limit: the success stories of the last ten years depend on massive user bases of 10 million, 100 million, or more. Each individual user contributes only pennies, if that, to overall revenue, and revenue per user is only going to go down further. I do not want to go into detail about why I believe this is true, but generally the software industry has matured: software for the commoners has been built, and the software niches are filled.

    Microsoft is stuck between this innovation limit, and Free software’s relentless catch-up. Microsoft will feel the squeeze and start acting like most corporations that see their business model die: Sue.

    I suspect that this fear of mine is just like Stallman’s, and I do not consider it irrational. Microsoft has every right to protect its patents. From the shareholder perspective, it must protect its patents when net losses threaten the company.

    Miguel gets Distracted

    Miguel says:

    “Working at CodePlex is a great way of helping steer Microsoft in the right direction. But to Richard, this simply does not compute.”

    Miguel has fallen for the classic work-with-them-instead-of-against-them trap. Just like the environmentalist employed by a big oil corporation: he is told that he will help the company along the right path, but really his employment is spin for advertisement and for government tax rebates. The company would do the same without the environmentalist’s help; only now the environmentalists have one less advocate.

    Microsoft would have done fine without Miguel. But now Microsoft can advertise Miguel to the Open Source community, and has hopefully distracted him enough to blunt him as a competitive threat.

    Miguel is motivated by Profit

    Open Source, which I define as Open Source *not* including Free Software, is sold to the public as a compromise between the GPL and proprietary licensing. Really, Open Source is an advertising scheme used to acquire the important tech-savvy users who install software on the majority of our machines. Open Source has the secondary goal of gaining some free debugging. Neither goal includes giving back.

    Open Source profiteering is pragmatic, effective, and efficient at bringing products to market, but it is a short-sighted goal. Open Source has its place; it is necessary, it does some good, and it is what I would do if I ever released software people wanted. That does not mean I have to like it: I like steak, but I don’t like the thought of chopping up cows.

    “Richard Stallman frequently conjures bogeymen to rally his base. Sometimes it is Microsoft, sometimes he makes up facts and sometimes he even attacks his own community”.

    First, Stallman is sometimes wrong; after all, he is only human. But to say he is conjuring bogeymen is misleading. Stallman is only issuing warnings of possible problems, and he advocates actions that should be taken to avoid them.

    Stallman is thinking long term, which is necessarily hard to do accurately. Stallman may misidentify the benign as threats (like with .Net, maybe), or he may identify threats as benign (I personally wanted something like GPL v3 back in the ’90s). Miguel does not attempt to see the long term, nor does he appreciate the difficulty in doing so. When Miguel hears warnings about .Net and Microsoft, but “knows” there is no danger over the next year, he simply assumes Stallman is fear-mongering.

    Stallman is not motivated by profit, and Miguel does not understand this. Miguel assumes his goals are shared by all others. When Miguel says:

    “Looking at opportunities where others see hopelessness. … I rather work on constructive solutions to problems than moan and complain.”

    Miguel assumes the opportunities he finds and the constructive solutions he invents would be lauded by any reasonable person. Miguel is wrong. Opportunities are defined by goals; constructive solutions, any solutions really, are defined by goals. Miguel’s goal is profit, so the opportunities and creative solutions he finds are of no interest to Stallman.

    Stallman is not a salesman. If Stallman were sent to Africa he would not see the shoeless as a “hopeless” situation, nor as an “opportunity”, because both perspectives require a profit goal. Stallman would probably walk shoeless with the natives, eat some good food, and maybe teach them to make their own shoes.


    Miguel’s perspective is that of a short-sighted, pragmatic profiteer. As such, he makes a few wrong statements and conclusions:

    1. Microsoft’s employees can control the company’s direction – No: shareholders control the company’s direction.
    2. Stallman is pessimistic because he does not laud the “opportunities” and “constructive solutions” – No: Stallman simply does not share Miguel’s profit motivation, so those “opportunities” and “constructive solutions” are nothing of the sort.
    3. Stallman is fear-mongering – No: Stallman is simply warning others of possible threats to Free Software.

    Posted in Rants | No Comments »

PI (permalink)

October 29th, 2009

    I have a small programming project, called YAY, which I work on occasionally.  The objective of YAY is to be a type-safe and easy-to-use parser-generator-and-compiler.  The parser-generator is the easy part; the compiler portion is more difficult.  Specifically, I am adding namespace processing so that the parsers specified are able to generate general graphs, and not just trees.  In theory, YAY should be able to parse simple “languages” like XML and HTML, including URLs and XML namespaces, with no post-processing.

    Programming languages, like Java, will require macro definitions before YAY can act as a full compiler. Unfortunately, macro definitions have not been added to YAY yet.

    Today I discovered π (PI), which is very much like YAY.  This is good news, because I do not particularly enjoy building YAY; I only like using it.  If π (PI) can replace YAY, then I can have someone else do the hard work while I play at a higher abstraction level.  π (PI) looks interesting because it seems to have avoided YAY’s intermediate parse-tree representation, and goes directly to macro (re)writing.

    But I am suspicious about whether it works.

    Now, I could be wrong, but YAY is complicated for a reason: it must allow identical language constructs inside different contexts to mean different things.  For example:

      A: for (Object o : MyList) {

        for (Object p : MyOtherList) {

          if (something) continue A;

        }

      }
    In this case, the continue A; refers to a scope that is ‘far’ from itself, and the same sequence of bytes could refer to a different exit point of another loop later in the program. I suspect exception-handling scope can be even more complicated.

    It is not obvious how π (PI) achieves this non-local syntax specification.

    In any case, language specification is one of my favourite subjects.  I am compelled to review the π (PI) implementation despite it being in a pre-alpha state.

    Posted in Coding, Languages | No Comments »