Sunday, March 9, 2014

Leaving the Cradle - Interstellar Probes

The most fascinating speculative documentary on interstellar probes I've seen is the Discovery Channel's Alien Planet. It tells the story of the exploration of a fictional extrasolar planet called Darwin IV, investigated by a human-built interstellar ship carrying several semi-autonomous probes. The objects of investigation are hypothetical life forms that could exist on such a planet.

From a scientific standpoint the documentary is quite accurate, but I find the proposed scenario improbable. The mission takes 40+ years and an enormous amount of energy to reach the remote star, yet it returns only a moderate amount of research data on the star system and the observed planet. The probes operate for a limited time and cover only a small area of the planet's surface.

I don't know whether NASA consulted the production team on the matter (at least one physicist, Michio Kaku, participated). The whole mission looks outdated - like a 1970s probe program, just with better cameras and AI.

But how should it look? Here is my scenario...

The fundamental problem with the mission in the documentary is that there is no redundancy. Even in the 70s NASA usually doubled its exploration efforts - that is why there were two Viking landers on Mars, and why Pioneer 10 and Pioneer 11 encountered Jupiter, followed by a pair of Voyagers. The cost of a program is much higher than the cost of the actual hardware, so duplication usually makes sense.

(Voyager)

The Von Braun interstellar ship shown in the documentary carries three exploration probes for redundancy. In the story, one disintegrates upon atmospheric entry and the other two complete the mission more or less successfully. But what happens if the Von Braun itself fails at some point during the 40+ year trip, or during the final parking in orbit around Darwin IV? We would have waited 40+ years for nothing.

I think an actual interstellar mission would consist of several much smaller ships launched from the solar system (possibly over a span of a few years) on slightly different trajectories. That increases the chances of a successful arrival.

The Von Braun has a massive protective screen, a giant communication dish, several probes and a lot of scientific equipment on board. It is just stupid to carry all that payload to another part of the Universe.

To simplify the payload, we need to remember that the alien star system consists of the same chemical elements as our own. So the payload should consist of landing platforms with mining and manufacturing facilities.

Upon arrival, the ships should park in an outer orbit of the star system, near the local Kuiper belt or any asteroid belt the system has (our observations should already predict the locations of these belts, and the ships must carry the astronomical equipment needed to find individual asteroids).


Now the goal is to identify sources of raw materials and to land the mining/manufacturing platforms there. Small asteroids are the better choice, since they have no atmosphere and their gravity poses no problems. Planets would be problematic on both counts, at least in the first phase of exploration.

The landers deploy ant-sized nano-robots whose goal is to find water and minerals and to bootstrap more advanced robots and production facilities. Water deposits on asteroids could be processed into propellant. Carbon, nickel, iron and other elements would go into the production of advanced materials (e.g. a zero-g environment allows the production of metal foams, which is impossible on a planetary surface).

Next comes the creation of a system-wide communication, observation and refuelling network. Newly constructed robot ships are launched into the inner system, closer to the planets we are actually interested in. Several asteroids are turned into giant communication dishes. Once the network is operational, all telemetry is relayed through these stations back to Earth. At the same time, the mission could receive mission-profile updates, since over 40+ years there will likely be improvements in the bootstrap and construction programs.

Finally, newly constructed probes are deployed by locally manufactured landing ships at various locations (selected by the observation network). And so begins the surface exploration.


This mission could go on for years... even for hundreds or thousands of years. After the active phase, when most of the surface has been covered, the probes could be hibernated, to be awakened periodically to check for changes in the ecosystem. The deployed orbital network would stay in passive observation mode, waiting for any significant event.

(Mono Lake is one of the shooting locations for the documentary)

If an alien civilisation ever emerges there, the mission would send us a signal.

On the other hand, if humans ever need a new star system for colonisation, the hibernated production network could wake up and prepare the system for the arrival of colonists. Maybe even terraform a planet for us!

So self-replicating mining/manufacturing nanobots would be the core of the future mission - not a giant ship.

If you really enjoy the idea of exploration and colonisation of an extrasolar planet, you may want to play an old 1994 game called Alien Legacy.



And for another point of view, try evolving life on an alien planet in Spore by Will Wright.

Friday, November 22, 2013

Web 2.0 or Web 0.9?

It is hard to find time to read books these days, but I try to keep an inspirational book nearby. That way I can read a little to relax and switch off from daily operational activities. I rotate these books, keeping the one that inspires me most at the moment on top of the pile.

This week, the book on top is Weaving the Web by Tim Berners-Lee.


A fascinating story of the Web's early days - how it was conceived and how it was shaped into the phenomenon we know today.

In that book you can find a very interesting fact: the first web browser, implemented on NeXT, was an editor as well. Tim Berners-Lee assumed back then that a WWW user would not only view information, but would also participate in editing it.

Once GUI browsers were implemented for other platforms, developers made them viewers only - partly because editing features were much harder to implement on those platforms, but mostly because users in general didn't mind working directly with HTML.

History made an unexpected turn after all. Browsers never became web editors as Tim Berners-Lee initially envisioned. Instead they became powerful application platforms - and now they can run such an editor inside them.

When the term Web 2.0 was coined, the main idea was that a web site would no longer rely only on content created by a centralised publisher, but rather on content generated by its users - blogs, wikis, social networks... So fresh and innovative!

In fact, a system where users are responsible for generating the content was the initial idea of the WWW. That is why the first web browser was an editor as well. And that is why Web 2.0 could just as well be called Web 0.9.

The point is that marketers can take an idea and proclaim it totally new and revolutionary, when in fact it was the original idea envisioned by a genius. And I'm not talking about AJAX or minor technological improvements here - I mean the very concept of the WWW.

Another point is that even geniuses can be wrong in predicting how their ideas will evolve. But in the case of the Web, the unexpected evolution still gave us the world originally envisioned by Tim Berners-Lee - one where users employ browsers to universally access and edit information.

So I create this content and put it on the Web 0.9 :)







Thursday, October 10, 2013

WordPress and 404

Today I've been working on a WordPress deployment. I enabled named permalinks (so that posts are addressed by /topic-name rather than the cryptic /?p=21) and my site went crazy, with a 404 Page Not Found appearing for every transition from the home page.

At first I suspected the bbPress plugin, since I had just installed it. A Google search turned up a lot of discussions of the problem and some pieces of the solution, but not the actual steps to fix it.

The real culprit turned out to be the Apache server configuration. Below are step-by-step instructions for a stand-alone WordPress installation on the LAMP stack:

1. mod_rewrite MUST be enabled, so run:

sudo a2enmod rewrite
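
You can verify that the module is actually loaded afterwards (assuming the standard Debian/Ubuntu Apache tooling):

# List the loaded Apache modules and filter for mod_rewrite
apache2ctl -M | grep rewrite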

2. The virtual host configuration for WordPress should have FollowSymLinks specified and the AllowOverride option set to All. Open the virtual host configuration - it should be a file in /etc/apache2/sites-enabled/ named something like *default.

The directory section for the deployed WordPress should look like this:

<Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
</Directory>


In my case, all I had to do was change AllowOverride None to AllowOverride All.
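
If you are not sure which file contains the directive, a quick recursive grep over the enabled sites will point you to it:

# Show every AllowOverride directive in the enabled virtual hosts
grep -Rn "AllowOverride" /etc/apache2/sites-enabled/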

3. Restart the Apache web server:

sudo service apache2 restart

4. Check that Apache has permission to write to the WordPress root folder (/var/www/ by default):

ls -l /var/www/

The owner and group should be www-data. If that is not the case, execute:

sudo chown -R www-data:www-data /var/www/

5. Now check that WordPress generates the .htaccess file. Open Settings -> Permalinks in the WordPress admin console, make some change to the settings (say, from Default to Post name) and click Save Changes. Then list the root of your WordPress deployment (the folder containing wp-admin and wp-content):

ls -a /var/www/

The .htaccess file should be present and its content should look like this:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress


At this point there shouldn't be any 404 errors, and post links should look the way you configured them.
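
To double-check without a browser, request a post by its new permalink and look at the response status (post-slug below is a hypothetical slug from your own blog):

# A 200 response means rewriting works; a 404 means .htaccess is still ignored
curl -I http://localhost/post-slug/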

=========
Conclusion
========= 

Now that I'm done with the technical description, let me start the emotional one.

The famous WordPress installation instructions I used to deploy my instance don't mention any Apache virtual host configuration (they do mention mod_rewrite, though). Yet the Permalinks settings are available in my default installation, and I can easily bring the site down just by changing these options.

That should not be possible. I can imagine breaking a blog engine by manually editing pages and layouts - but not by changing an innocent-looking option box.

So first, the Apache virtual host configuration should be included in the installation instructions. That would make the instructions less accessible and more complicated, but it would ultimately produce more robust installations.

Failing that, WordPress should at least show a warning on the Permalinks Settings page about possible problems caused by the HTTP server configuration.

But the best solution would be for WordPress to detect these problems, disable the settings and explain what should be configured to unlock them. However, while it is easy to test local write permissions, doing the same for the Apache configuration is problematic.

A similar situation arises when you install themes or plugins. If you don't have write access to the installation root, WordPress just offers you the FTP options. It should also show a disabled option for automatic HTTP install, and mention that write access must be configured to enable it.

When I see that kind of "user-friendly" interaction in the most popular blogging platform, it makes me wonder what the experience is like on less popular alternatives.

Tuesday, September 24, 2013

Secret Platforms of DOOM

I'm back in the blogosphere. It's been a crazy two years - a new job, a new car, a move to a new apartment. But the most fascinating news is a newborn baby boy. He is 9 months old now, and we are learning to eat, to crawl and to bash in UNIX.

But even in these turbulent times there are a lot of thoughts I just need to extract from my brain and put down on paper. Today I want to talk about platforms.

In our computerised society you often hear about platforms. A platform is an environment you can build your software upon. Intel x86 is a platform, Java is a platform, MS Windows is a platform. But Spring or Ruby on Rails, for example, are just frameworks - they don't provide an ecosystem self-contained enough to be called a platform.

Recently I've been looking for convenient ways to prototype, and I discovered a hidden world of platforms. It's not that I found new products - rather, I looked at old ones from a new perspective. I started to evaluate everything as a platform.

Sometimes the results are surprising. It turns out there are a lot of software products you can use as a platform.

DOOM from id Software is one of them.

(DOOM load screen)
Just think about it. DOOM keeps all its content in a single WAD file. By modifying it you can change the game significantly - add new sounds, weapons, monsters, levels. An enclosed ecosystem is built on top of that: there are a lot of tools to modify the content, and you can create a DOOM mod comparatively easily.
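
The container format itself is famously simple: a 12-byte header - the 4-byte magic "IWAD" or "PWAD", followed by two little-endian 32-bit integers holding the lump count and the offset of the lump directory. A quick way to peek at it, assuming a shareware doom1.wad sits in the current directory:

# Dump the 12-byte WAD header: magic, lump count, directory offset
xxd -l 12 doom1.wad

# The magic alone distinguishes an original IWAD from a PWAD mod
head -c 4 doom1.wad; echo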

Sure, the possibilities are strictly limited by the capacity and restrictions of the engine. But it doesn't matter. What matters is that it gives you an enclosed little universe to play with. You become a little god capable of shaping that universe. And these so-called restrictions are actually vectors for creativity - just look at some of the total conversions.

So DOOM is a platform, and not a small one. Shipped in 1993, by the end of 1995 it was installed on more PCs than the newest platform from Microsoft - Windows 95. And that's despite the billion-dollar marketing campaign behind the Windows 95 release. The entire marketing campaign for DOOM, meanwhile, was a shareware version uploaded to FTP (in today's social media terms, I think it would be called viral).

To date, there are more than 16,000 WADs available for download - beating the title counts of most gaming consoles and some operating systems (387 titles for the Nintendo 64, 312 for the 3DO). You could say that WADs are hobby mods and can't really be compared to commercial titles. But that is exactly what makes DOOM a viable platform for prototyping.

(Eureka DOOM level editor)
I need tools to prototype, and I find that small platforms of this kind are ideal for it. That is why the best game engines attract massive modding communities and have produced high-quality titles - look at Dystopia. You don't need a lot of time to get started, and you can use existing content as a starting point - and that is crucial for prototyping.

As an app developer, you need to think about your product in terms of a platform. Wolfenstein 3D wasn't a platform, even though the potential was there - there was no WAD, and content modification was too hard.

With DOOM, John Carmack realized the true value of the modding community and created the game as a platform. There were a lot of concerns about that approach - legal and financial ones. Business-minded people reasonably argued that freely available community content would reduce sales of the commercial episodes and the upcoming DOOM II. But Carmack was a hacker-minded person first, so he stuck to the original idea of modifiable content and even released the source of the DOOM level editor (written in Objective-C for the NeXT workstation). In retrospect, it was a smart move: it created a strong fan base and in fact prolonged the lifespan of the game.

(DOOM WAD editor)

I think the WAD concept was crucial to the birth of the legend. You can still get the open-sourced version of DOOM with one of the freely available WADs, download some level-editing tools and start prototyping. It is really a great platform for that.

DOOM long and prosper.

Tuesday, October 25, 2011

Server Backstage of MMO Games

There are some sweet topics in high tech that easily catch the attention of the general public. Try giving a talk on some trendy AJAX framework, and most people outside web development won't be interested. Try the same with SQL optimization, and you'll lose everybody who is not a back-end developer. Maybe a topic for beginners? A lot of experienced people won't be interested. Tricky details of JVM GC tuning? Only the experts will show up. It seems every talk has its target audience.

But I've found that some topics appeal to a wide audience no matter what. One of these magic topics is MMO game development. Whenever I mention my involvement in the MMO world, I get a lot of interest - even from people who have never worked in game development. The topic is magnetic, and everybody wants some insight into how MMO games tick.

Actually, you don't have to be deeply familiar with the technology to appreciate the number of challenges MMO developers face: thousands of players, huge traffic, latency, synchronization, load balancing and a lot of other issues - both technical and social. MMO games are enormous projects requiring a lot of people. And also money... a lot of money...

No wonder it fascinates everybody, even people who are not that much into technology. I've had artists and game designers asking me how the dark server side works.

This September I was able to give a talk on the server side of MMO games. Ciklum, a cool company, held a GameDev Saturday in their Dnepropetrovsk office. It attracted a lot of people involved in game development - the best possible audience for a talk on MMOGs. Since attendees weren't only developers, I tried not to dive too deep into technical details and to keep it catchy for everyone.


I covered basically the whole life cycle of a big MMO project from the server-side perspective. I drew on my experience with Cities XL - one of the best projects I've ever participated in - and tried to pass all the positive vibes on to the audience.



The video of my talk is now available here (the talk is in Russian), along with the slides. There is also a photo album of the event.

Kudos to Tatyana Prudnikova and the other Ciklum guys and girls for their effort in organizing the event. It was a remarkable experience in a remarkable city on the Dnepr river :)

 

Tuesday, September 13, 2011

Overstructuring


Today is the 0xFF day of the year - a holiday for anybody who can read hex. Therefore I'm going to present a little essay on directory structures and their overcomplication.


There is something magical about complicated structures. They seem to have lives of their own, attracting people's attention like magnets. Their existence is justified by the problems they are trying to solve - some problems really are that complex and can't be simplified. But too often engineers keep building complicated structures where simple ones would do.

I will take Maven as an example, since I mentioned it in my latest posts, but you can find many other examples throughout the industry and beyond.

The Maven Standard Directory Layout is stupid. It is an example of unnecessary complication: too deep, with too many subcategories. I agree that the structure is smart and fits well... for a big project. But the fact is that most projects are not that big at all. Some are small, some are bigger, but few of them really require the layout Maven offers. What you end up with is a bunch of directories each containing a single subdirectory - and in that case there was no point in the subcategory in the first place. The layout is overstructured.
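
To see the depth for yourself, here is roughly what the standard layout asks for, reproduced as a shell command (the exact set of directories varies by project type):

# A rough reproduction of Maven's standard layout -
# a small project will leave most of these empty
mkdir -p src/main/java src/main/resources src/main/filters \
         src/main/webapp src/test/java src/test/resources \
         src/test/filters src/site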

Yes, I know you can redefine the layout if needed. But the fact that this is the default, and that Maven "strongly recommends" using it, makes it a de facto standard - most developers prefer to stick with it.

Just look through a few Maven projects and you'll find that most of them have overstructured directory layouts. I'd estimate that 95% of them do, and only 3% can actually fill that structure for real. Wondering about the missing 2%? Those are projects so complicated that the layout offered by Maven is too primitive for them.

So basically the Maven Standard Directory Layout fits only a small selection of real projects. Bad job selecting your target audience, Maven guys! No wonder it took Maven so long to reach the mainstream after its introduction in 2001.

The layout is a classic example of design by committee. Who said it is the right structure? It is claimed that a number of industry experts participated in defining it. I have no doubts on that account. It probably went like this:

"...we need source files and tests."

"Yep. And also place to store configuration files..."

"What about filters?"

"...and don't forget webapps!...".

Everybody tried to push their own vision of the structure, and the resulting compromise is an overcomplicated layout - one folder for each expert!


Now, somebody could argue that a common layout lets developers feel at home in any Maven project, requires less configuration and makes tool integration easier. All true - but none of it is an excuse for a complicated layout.

Most projects require only folders for source files, tests, configs and, optionally, scripts and resources. It is logical to provide a simple layout that fits the majority of projects (the 95%) while giving the means to redefine it for complicated ones. One could even provide several layout profiles for different kinds of projects - but the simplest one must be the default. Sure, that would complicate tool development, but we are not in the business of making tool developers' lives easier.

Right now I'm promoting Gradle as the best build tool for Java projects. No XML, and the clarity of its DSL makes it a joyride. To my disappointment, Gradle also inherited the Maven layout instead of rethinking it. The committee has prevailed over common sense, and even Gradle couldn't resist the magic of overstructuring. C'est dommage :(

KISS! And happy 0xFF day! We still control the power of the machines... Or are they starting to control us?

Monday, August 29, 2011

XML Overdose

While giving a talk at IT-Jam 2011 in Odessa, I mentioned that XML is poisoning our lives (the exact Russian was "...портит нам жизнь" - "spoils our life"). Apparently some people got the impression that I'm in favor of going back to monochrome ASCII terminals or something...


Well, allow me to clarify my perspective. I use XML extensively in my day-to-day work, and I think it is a clever syntax for presenting data - maybe a little overloaded with meta-information, but clever anyway. It has its weaknesses, but in general it is a good data format. It is a standard, it is backed by a lot of tools, and it is ubiquitous throughout the computing world - supported by most modern languages, frameworks and libraries.

My attitude toward XML changed over the years I used it. At first there was confusion at that mystic way of organizing data - after years of HTML formatting, the concept of semantic markup was hard to grasp. But once the idea sank in, I started to appreciate the power of XML and tried to use it everywhere I could: transfer protocols, data formats, configurations... It soon became apparent that I was actually writing real programming languages in XML. And a lot of people, including framework developers, were doing the same. We ended up in a world where we do more declarative coding in XML than coding in imperative programming languages... and some ideas take far more space to express in XML, so it hardly makes things concise.

Once I realized that, I switched from blindly injecting XML into everything to a more pragmatic approach. I think we are heavily overusing XML these days. It fits some purposes well, but for others it just adds complexity. By trying to fit XML where it doesn't belong, we are losing our sense of simple solutions and getting an XML overdose.

And that XML overdose plays tricks with our brains - just look at a Maven dependency declaration:

<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.0</version>
        <type>jar</type>
        <optional>true</optional>
    </dependency>
</dependencies>

To set five values (the two ids, version, type and the optional flag) we have to write seven lines of XML for a single dependency. And a real project can have tens if not hundreds of dependencies, each needing 5-8 lines to define. That's what I call an XML overdose!

In this case Maven is blindly following the best-practice recommendations you can find at W3Schools: use elements instead of attributes, since elements are flexible, extensible, can have multiple values, etc... And when do they propose using attributes? When you have identification values like NAME or ID.

So look at it this way: groupId, artifactId and version are identifiers for a dependency, so they can go into attributes. The developers of Apache Ivy have done just that, and the same declaration in Ivy takes only one line of XML:

<dependencies>
    <dependency org="junit" name="junit" rev="4.0" optional="true"/>
</dependencies>

Or we can get rid of XML altogether. Gradle does exactly that, making the declaration even more concise:

dependencies {
    testCompile 'junit:junit:4.0'
}

Now that is a big needle to stick into anyone affected by the XML overdose!

Just ask yourself: why do I need a bloated syntax to define my project dependencies? Only because it is a STANDARD and it's the way most developers do it?

I'd rather keep my own perspective on this matter, since I'm tired of the tag bloat in my pom.xml files.

If you are designing an XML format that humans will work with, keep it as simple as possible - that is the one ground rule. Avoid unnecessary structure and nesting, and minimize mandatory values by using defaults. Above all, DO NOT use XML namespaces - they make tags ugly and unreadable. And if possible, consider an alternative format instead of XML.

In summary, with some effort on our part, it is possible to make XML data comfortable to work with. Just don't overdose on it!

P.S. Don't get the impression that I have something against monochrome terminals. I'm a fan of retro-computing. I may have missed the glory days of the PDP-11 era, but I had my time programming the ZX Spectrum in the early 90s - a great time with a really neat machine!