Comparing ExpressJS Session Stores

For my next project, I was exploring session stores for ExpressJS, and I found a question on StackOverflow. However, I was frustrated with the accepted answer (with 30 upvotes), which compares a local in-memory session store with remote session stores. The answer is very misleading: there is no chance that remote storage can beat local storage. Even if the in-memory solution did O(n^9) calculations, it would still be faster than going over the network.

So I decided to carry out my own test on my local machine, with local installations of Express and MongoDB. The results were interesting. I tested a simple server that does the following:

app.get("/", function (req, res) {
    if (req.session && req.session.no) {
        req.session.no = req.session.no + 1;
    } else if (req.session) {
        req.session.no = 1;
    }
    res.send("No: " + (req.session ? req.session.no : "none"));
});

The results for concurrency levels 1, 10, 100 and 500 are:

Concurrency: 1
none       4484.86 [#/sec] 
memory     2144.15 [#/sec] 
redis      1891.96 [#/sec] 
mongo      710.85 [#/sec] 
Concurrency: 10
none       5737.21 [#/sec] 
memory     3336.45 [#/sec] 
redis      3164.84 [#/sec] 
mongo      1783.65 [#/sec] 
Concurrency: 100
none       5500.41 [#/sec] 
memory     3274.33 [#/sec] 
redis      3269.49 [#/sec] 
mongo      2416.72 [#/sec] 
Concurrency: 500
none       5008.14 [#/sec] 
memory     3137.93 [#/sec] 
redis      3122.37 [#/sec] 
mongo      2258.21 [#/sec] 
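For reference, here is a sketch of how each store can be wired up, assuming the classic connect-redis and connect-mongo middleware packages. Option names vary across versions, so treat this as illustrative rather than the exact benchmark setup:

```javascript
var express = require("express");
var app = express();

// memory: the default MemoryStore, nothing extra to configure
app.use(express.session({ secret: "keyboard cat" }));

// redis: pass a RedisStore instance as the store option
var RedisStore = require("connect-redis")(express);
app.use(express.session({ secret: "keyboard cat", store: new RedisStore() }));

// mongo: same pattern with connect-mongo
var MongoStore = require("connect-mongo")(express);
app.use(express.session({
    secret: "keyboard cat",
    store: new MongoStore({ url: "mongodb://localhost/sessions" })
}));
```

The "none" case simply omits the session middleware entirely, which is why it is so much faster in the tables above.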

The source code is available on GitHub:

I hope this helps you choose a better session store for your needs. I will go with Redis, because it is almost as fast as the in-memory solution, whereas MongoDB's performance falls well behind.


My First Android Game: Izuna Drop

As part of my CS-319 Object-Oriented Software Engineering course, I developed a computer game with Nail Akıncı and Naime Nur Çadırcı, called Izuna Drop. It is a simple space shooter clone. As design was more important in that course, the implementation was not very efficient: it was written in Java and, due to our bugs, it required approximately 1 GB of memory. If you wonder what that looked like, it is available on GitHub/Izuna.

This summer, I bought a Turkish book, Android Oyun Programlama (Android Game Programming). It focuses on AndEngine, a Java game engine for Android. The book is nice, but it covers the GLES1 version of AndEngine, which is now outdated and hard to find support for, since the GLES2 version of AndEngine has many changes over the old one. Still, it is a nice book for teaching you the basics, and it is not rocket science to make the transition to GLES2 on your own.

Anyway, I read the book and started to implement a few simple scenes. The biggest problem in game development is that you cannot easily find graphical assets, which are an essential part. I know a little bit about design, but it is not easy to design nice game assets. Fortunately Nail Akıncı, my team member, has developed his modeling and animation skills and provided us with nicely hand-made assets. (If you Google enough, you can also find very nice free 2D assets for your game.) So, I decided to re-implement our game for Android. It took almost a month, working mostly at night during this hot summer. AndEngine simplifies many things that we had to take care of manually in our PC version, so the Android version is much faster and has a small memory footprint. Its source can also be viewed at GitHub/izuna-android. It is not perfect; for instance, collisions are not pixel perfect. But overall it is a playable and enjoyable game with 5 levels, each consisting of 10 unique waves.

Here are some screenshots from the Izuna Drop:



I would be glad if you could download it and test it:


Thank you!


My experiences with torrenting on the cloud

I have been living in a dormitory for 4 years, and sometimes I need to download something. Even if it is a legitimate file, it is forbidden to use torrents in the Bilkent dormitory; you get throttled, or even suspended for a while. Then, earlier this year, I found the service via a referral. It runs in the cloud: you upload your torrent file, or just paste its public URL, and it downloads the torrent to your server. After that, you can download your file over regular HTTP, which is no different from your regular web traffic. It is possible to limit these transfers; however, unlike a regular torrent, you do not have to upload the pieces yourself. The service also uploads for you: it takes care of seeding up to a 1.00 ratio for public torrents, and 2.00 for private torrents. This is very useful, since you do not consume more traffic than downloading your file.

Besides these benefits, the service also has good download speeds. As it is in the cloud, it is probably closer to each source than you can ever be at home, so it downloads faster than you can. Therefore, you do not have to keep your computer on longer than you need. Also, if somebody has recently downloaded the torrent you submitted, you can access it immediately, as the service keeps those files in a cache.

In terms of download speed from the service, it varies. It can be very fast, or very slow. I recommend using a download manager; it can increase the download speed a lot. I have a little tool named axel on Linux, which can download a file by dividing it into up to 256 parts. I have seen 70 MB/sec (megabytes, equivalent to 0.56 gbit/sec). But that is not always the case. Usually, with around 10-20 parts, files can be downloaded at 5-6 MB/sec at my dormitory.
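The way such download managers speed things up is simple: the file is split into byte ranges, and each range is fetched over its own HTTP connection with a "Range: bytes=start-end" header. A minimal sketch of the range computation (my own illustration, not axel's actual code):

```javascript
// Split a download of totalBytes into `parts` contiguous byte ranges,
// one per HTTP connection. Each range maps to a Range request header.
function splitRanges(totalBytes, parts) {
    var ranges = [];
    var base = Math.floor(totalBytes / parts);
    var start = 0;
    for (var i = 0; i < parts; i++) {
        // the last part absorbs the remainder
        var end = (i === parts - 1) ? totalBytes - 1 : start + base - 1;
        ranges.push({ start: start, end: end });
        start = end + 1;
    }
    return ranges;
}

// e.g. splitRanges(1000, 4) covers 0-249, 250-499, 500-749, 750-999
```

Each connection then writes its bytes at its own offset in the output file, which is why random access speed matters so much for this trick.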

On the bad side, I have had problems resuming downloads that I started before; they simply do not start. This happens when the filename is not detected and you need to supply additional HTTP login parameters via your download manager. But sometimes it just gives a URL with a token; these can be paused and resumed without any problems, as long as you do not change your password.

One last good feature: if you have a torrent that consists of too many files, the service can zip them on the fly and deliver the archive to you. It is really useful. I highly recommend it if you suffer from low download speeds, or sit behind a router that does not allow you to download torrents.

Etiketler , , , ,

How I learned to live with 120 GB (after 500 GB)

My laptop is from August ’09, so it is now a little bit slow. It had a 500 GB traditional spinning disk drive. I know, it is not dead yet, but its random read/write performance is horrible. My Windows 7 installation took 1-2 minutes to boot and be ready to open new programs, so I never shut down my machine; I always used hibernate/suspend. Opening heavy programs like Photoshop was not pleasant. In fact, the biggest reason was that I also use Linux for development. However, my precious games and some programs are not compatible with Linux, and Wine and the open source alternatives do not provide a satisfactory experience for me. So I decided to replace my spinning disk with an SSD. Wow, these are expensive, at least for a student budget and an old laptop. So I could only afford a 120 GB Kingston SSD. My laptop does not even have SATA 3, so I get at most 300 MB/s throughput, although my SSD supports ~550 MB/s reads/writes.

Good Sides

Firstly, I was amazed. A freshly installed Windows 7 takes 10 seconds to boot, and my Lubuntu (my favorite Linux distro) takes 5 seconds. It was very good. On Linux, installing packages was the most amazing thing. I have very high bandwidth at the dormitory, but unpacking and installing always took a long time. Now it is magical: after downloading all the packages, installing Eclipse with all recommended packages took only 10 seconds. Wow, just wow. On both Windows and Linux, all programs open instantly, with no waiting time except Photoshop, and even that opens in 8 seconds. Games start and load instantly; most of the time in online games, I am now the one waiting for other people to be ready. And the best thing is, when I have a large file to download from the web, I can download it in multiple parts without waiting for the disk to be ready. As I said, random access on a spinning disk is terrible. With a little tool called axel on Linux, I can open 250 simultaneous connections, and I almost reached 75 MB/s (megabytes per second) while downloading. Great, everything is so fast now, but there is a catch.

Bad Sides

SSDs are so expensive. I remember when SSDs first showed up; they were even more expensive and slower. The situation is better now, but they are still expensive compared to spinning disks. For the price of my 120 GB SSD, I could have bought a 1 TB 7200 rpm disk instead. But I admit that the speed gain is worth it.

How to cope with low storage

The actual size is 111 GB. I tried many configurations. I tried having only Linux, but it was not enough; I needed Windows too. So I gave 99 GB to Windows, 10 GB to Linux, and 2 GB to a Linux swap partition. (I should actually give it 4 GB, but I didn't, since my usage patterns never exceed 2 GB of swap.) As you can see, I count even the swap partition's 2 GB; every bit is important. On Linux, I delete the apt cache; there is no need to store those packages, right? I installed them and they are not required anymore. I occasionally check the space left with df -h to see if everything is alright. I even deleted some packages I do not use, and I install packages without suggested ones using the --no-install-recommends flag.

For an SSD, Windows is a huge problem. Come on, what takes 15 GB on a fresh install? Disabling Windows features is not very helpful, as many features are required. However, I found a way to delete service pack files, which saves 3 GB. Additionally, when I open the Disk Cleanup utility, I clean up everything there. As a note, when you click System Files, there is a new tab which lets you delete all restore point backups except the most recent one; it surely saves considerable space. As a precaution, I keep only the programs that I use and delete the ones that are not used. The biggest space consumer is games: Call of Duty: Black Ops 2 takes 17 GB when fully installed, so I keep at most 3-4 games present. I used to keep most of my downloads in the Downloads folder, but now I occasionally clean the directory and just delete everything; if I installed them, or just left them there, they are probably not critical to store. That is why the Internet exists; I can download them again most of the time. As another new habit, my Dropbox (I have 88 GB with referrals and contests) is only 10% full, and I use selective sync. I use it as an online archive for folders that do not need to change or do not need to be on my PC all the time, such as old projects, assignments, photos and some e-books. Lastly, for media, I do not store much music, only the best tracks I like, approximately 5 GB. For other music, I use Grooveshark and YouTube. For videos, I also use YouTube. However, there are not many streaming video services in Turkey for TV series and movies, so the options are rare; I use TV, my films stored on my 2 TB external disk, or DVDs.


As you can see, since SSD prices are high and you cannot afford big capacities, you have to change your habits. Ask yourself: do you really need that file to be present on your disk, or can you move it to the web or to external disks? Or, you know, you can just delete it. If you played and finished a game, you can uninstall it. The same goes for programs, or any content that you have exhausted. I assure you, if you can change your storage habits, you will do very well. If my laptop supported SATA 3, it would be 2x faster; however, 300 MB/s is also a very good value, since the best spinning disks supply about 120 MB/s and still suck at random access. Finally, as a bonus, you can move your laptop freely while working, because there are no moving parts left, except the fans, which is what I needed most!


WSFTP: File Transfer over Websockets

If you have been following my blog, you may have seen that I once tried to make a WS-FTP project with Java; however, web sockets were still a draft at the time, the specification was changing quickly, and it was not safe to build on. In the meantime, I learned and mastered NodeJS and met a web socket library for it, and I decided to use them, since they provide a better interface and it is much easier to write a web server with NodeJS than with Java.

The project I made is very simple and not optimized. I quickly wanted to demonstrate the concept, and I will try to explain the workflow now:

  1. The NodeJS server listens on port 3000. It provides both the socket interface and a web interface for administration (not implemented yet, but that's the idea).
  2. The Chrome Extension (client) requests unlimited storage and the ability to connect to any site, so the connection can be established.
  3. The client connects to the server and asks for the contents of an arbitrary folder, with filenames and sizes.
  4. When the client wants to download a file, it asks for block 0 of the file. Depending on the size, it adds N requests to the work queue.
  5. The server just opens the file and reads the requested block. In my app, the block size is 8 KB.
  6. Whenever a block arrives at the client, it constructs a BLOB object and keeps it in memory.
  7. After construction of the BLOB, the client pops 2 tasks from the queue and requests them.
  8. This process continues until all blocks are fetched. The client then saves the data to Chrome's sandboxed filesystem, from which the user can download the file to the local filesystem.
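The workflow above can be sketched roughly like this, with an in-memory stand-in for the socket layer (the names here are mine for illustration, not the actual project's API):

```javascript
var BLOCK_SIZE = 8 * 1024; // 8 KB blocks, as in step 5

// "server" side: return the requested block of a file buffer
function readBlock(file, index) {
    return file.slice(index * BLOCK_SIZE, (index + 1) * BLOCK_SIZE);
}

// "client" side: fetch block 0 first, then drain the work queue,
// taking up to 2 new tasks per round as in step 7
function fetchFile(file) {
    var totalBlocks = Math.ceil(file.length / BLOCK_SIZE);
    var queue = [];
    for (var i = 1; i < totalBlocks; i++) queue.push(i);

    var blocks = [];
    blocks[0] = readBlock(file, 0);
    while (queue.length > 0) {
        var batch = queue.splice(0, 2);
        batch.forEach(function (idx) {
            blocks[idx] = readBlock(file, idx);
        });
    }
    return Buffer.concat(blocks);
}
```

In the real extension the `readBlock` call is a round trip over the socket, which is exactly why requesting blocks in small batches (instead of all at once) keeps the work queue manageable.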

As you can see, it is very primitive: only 1 connection and 1 transfer at a time, and if you install it, it will not have a very nice GUI. However, it is certainly a starting point. Currently there are some issues:

  • It is single threaded, and upon receiving a packet from a socket, Chrome can hang depending on the file size.
  • Only 1 connection and 1 transfer at a time.
  • No nice GUI, but it is about to come!
  • File saving and in-memory handling are not efficient this way; it needs a better algorithm and some temporary disk access.

Additionally, as you can imagine, this can easily be turned into a P2P system. Basically, we download blocks, and there is no reason not to download blocks from multiple clients. I have made this project available on GitHub, so anyone can easily access it, freely use it, and maybe extend it!


NodeJS: Simple Clustering Benchmark

If you are interested in Node, you know that NodeJS uses an event-driven I/O model and is single threaded, so it does not use multiple cores out of the box. However, by using the clustering support, you can bump up your application's speed. I will give a plain hello world example; however, consider that you will never send just "Hello World" to anyone in the world without some background processing such as database connections, cache connections, file operations, or simply I/O.

The server configuration is: Intel(R) Xeon(R) CPU  E5606 @ 2.13GHz (8 Cores), 16 GB RAM.

Clustering Support with 8 instances of Node

var express = require('express');
var cluster = require("cluster");
var os = require("os");

var app = module.exports = express.createServer();

app.get('/', function (req, res) {
	res.send("Hello World");
});

if (cluster.isMaster) {
	console.log("CPUS: " + os.cpus().length);
	for (var i = 0; i < os.cpus().length; i++) {
		cluster.fork();
	}
} else {
	app.listen(3000);
}

Without any Clustering Support

var express = require('express');

var app = module.exports = express.createServer();

app.get('/', function (req, res) {
	res.send("Hello World");
});

app.listen(3000);


I used ab to test the system with 100,000 requests at a concurrency level of 1000.

With clustering: 11612.01 requests / sec
No clustering: 3497.04 requests / sec

A little tweak can help increase your throughput, since it utilizes all the cores of the server. If you separate your database and your server (you should), it will give an even better result.


A Better Android Application for STARS

We have all seen the STARS Android application that BCC developed. There is absolutely no need to comment on it.

Since I had not done my self-study on "how to build an Android application" for a long time, I had not been writing any programs; I was too lazy. Android's platform is nice and flexible; a bit confusing, but pleasant. At least it does not force you to buy a Mac.

After reading the necessary tutorials, I was able to write a simple application. So I wrote a mobile application for STARS, which at least will be useful to me. Once it is complete, I will also publish it on the Android Market, and it will be open source. I do not want to show its source right now, because the data-reading part is a bit ugly. But for now it is as efficient as possible. At some point it needs to be rewritten with an HTML parser.


Why is my application better?

  1. It does not lock the interface while connecting to the Internet.
  2. It remembers your password, so you do not have to enter it every time.
  3. It uses Android's native interface widgets. (BCC's does too, but it shows the desktop layout inside a WebView.)
  4. It fits the screen.
  5. On the login screen, you can scroll the interface if needed, so you can press the login button while the keyboard is open. (You will understand if you have a small screen.)


I am publishing the application now; for the moment, only the Grades page works. The others are on the way. If anyone wants to write the methods for the other sections in Java, do not hesitate.

If you have security concerns, then do not download it; nobody is forcing you. It is open to those who want it. And if you do worry about security, you can change your password 5 seconds after trying it. If I had wanted to collect passwords, I could have collected roughly 3000 of them by now through my service, which has been viewed 7,828 times to date. Whether to trust it is up to you. I am open to feedback from everyone.

The download link is here:



Introducing WebSocket File Transfer

First of all, all my work below is in draft form and not optimized, even a little bit. It has performance issues, but I just want to demonstrate the capabilities of this new technology. I am not the IETF or any other organization that can define a protocol, but I hope this article will inspire some people.

HTML5 Magic

In case you do not know, the HTML5 specification offers a File API for manipulating both binary and text files on the client side. Your imagination is the only limit to the capabilities of this technology. You can have a look at a detailed tutorial on the HTML5Rocks website. Remember, the files are saved to a sandbox area, not the real file system.

HTML5 also offers a new interface called the WebSockets API, which lets you open a socket to the web server that remains open until you or the server close it, providing two-way communication. This saves a lot of time when time matters: XHR requests close right after the response is received, so not having to re-open a TCP connection for every request is a great saving. WebSockets offer both binary and text transfer; however, only the Google Chrome 15 Beta implements binary transfer. One ugly fact: a single message cannot exceed 32 KB, or at least mine cannot; Chrome throws an error that the frame size is too big.
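In practice, the 32 KB limit means any large message has to be split into many smaller frames before sending. A sketch of the slicing (shown here with Node Buffers for testability; in the browser you would slice a Blob or ArrayBuffer the same way and call socket.send() on each piece):

```javascript
var FRAME_SIZE = 32 * 1024; // Chrome rejects frames larger than this

// Slice arbitrary binary data into frames of at most 32 KB each
function toFrames(data) {
    var frames = [];
    for (var offset = 0; offset < data.length; offset += FRAME_SIZE) {
        frames.push(data.slice(offset, offset + FRAME_SIZE));
    }
    return frames;
}
```

The receiving side then has to reassemble the frames, which is where the ordering problems described later come from.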


Anyways, with these two technologies, I had a great idea: I could implement a simple file server. The server is implemented in Java; what it actually does is listen on a user-defined port and respond to each request defined by my specification. I implemented the server using the Webbit server.

Of course, no browser allows writing data to the client's disk or opening sockets to arbitrary computers in the world. However, Google Chrome has an application model where you can ask the user for extra permissions. So I could open sockets to any server in the world and use unlimited storage. The client side was implemented in JavaScript, which had some performance issues, but is a great scripting language.

Server / Client Interaction

  1. The client opens a web socket to the server.
  2. The server accepts it.
  3. The client sends authentication information.
  4. If the user is accepted, the file list is sent to the user as a string message.
  5. The user asks for information about file X.
  6. The server sends how many pieces the file has.
  7. The client requests file X, piece 0.
  8. The server sends the first 32 KB of data as binary. Until 4 MB is reached, the server keeps sending 32 KB chunks of data.
  9. The server sends an OK message with the piece number once the piece is complete.
  10. The client saves the 4 MB piece to disk and frees the BLOB from memory.
  11. Go to step 7 with piece + 1, and repeat until the last piece is received.
  12. After all pieces are received, read all pieces one by one and append them to the final file.
  13. Close the connection.
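To make the numbers in these steps concrete: with 4 MB pieces and 32 KB chunks, each full piece is exactly 128 chunks. A small helper (my own illustration, not part of the original code) that computes the transfer layout for a given file size:

```javascript
var CHUNK = 32 * 1024;        // one WebSocket message (step 8)
var PIECE = 4 * 1024 * 1024;  // one piece saved to disk (step 10)

// Compute how many pieces and chunks a file of the given size needs
function layout(fileSize) {
    return {
        pieces: Math.ceil(fileSize / PIECE),
        chunksPerFullPiece: PIECE / CHUNK, // always 128
        totalChunks: Math.ceil(fileSize / CHUNK)
    };
}
```

For example, the 30 MB file mentioned below works out to 8 pieces and 960 chunks, while a ~250 MB file needs 63 pieces, which is where the concurrency problems start to show.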


This works! Really! But not with very large files, because, as some of you may imagine, there are concurrency problems. Also, the binary data received from the server is not marked with piece and chunk numbers. I thought that since I was testing locally, the packets would arrive in the order I sent them, but this is not the case. If I try to fetch a ~250 MB file, it fails; but I could successfully transfer a 30 MB file within my computer. However, it is not suitable for a real-life client-server model; it could even fail in a LAN.
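One way to fix the ordering problem would be to prefix every binary chunk with its piece and chunk numbers, so the client can place each chunk correctly even when frames arrive out of order. A sketch, assuming an 8-byte big-endian header (this is not part of the original implementation, just an idea):

```javascript
// Prepend an 8-byte header: 4 bytes piece number, 4 bytes chunk number
function tagChunk(piece, chunk, payload) {
    var header = Buffer.alloc(8);
    header.writeUInt32BE(piece, 0);
    header.writeUInt32BE(chunk, 4);
    return Buffer.concat([header, payload]);
}

// Parse the header back off a received message
function untagChunk(message) {
    return {
        piece: message.readUInt32BE(0),
        chunk: message.readUInt32BE(4),
        payload: message.slice(8)
    };
}
```

With this, the client can write each payload at offset piece * 4 MB + chunk * 32 KB instead of relying on arrival order.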

My point was to prove that HTML5 can be used to create an FTP-like client. The File API and the WebSockets API provide a lot of opportunities. What I have done is not a distributable project, yet. But I have a page on Google Project Hosting. I am just learning SVN, so please forgive me for any inconvenience.





Google Plus Invites

Actually, I am not much of a Google Plus user, and certainly not a fanboy. They built it; it is nice and clean, it has some extra features, it is all well and good. It is not a brand new world, either. The Circles idea is nicely done, but if Facebook builds the same thing, there may not be much reason left to use it. Another nice thing is that you do not have to add back the people who add you: when someone puts you in their "circle", you can also see their posts. Unless you add them, you do not see their posts in your stream; they appear under a section called "Incoming". Those are its nice extras. It just came to my mind: the photo galleries also look nicer, in my opinion.

For those who want an invite:



A Law That Will Stimulate the Web Design Market

Yes, I can say that the new Turkish Commercial Code will benefit people like us the most. Anyway, if anyone needs a website, or knows somebody who does, they can reach me through my own site.

ARTICLE 1502.

(1) Every capital company is obliged to create a website and to dedicate a specific section of it to the announcements the company is legally required to make; to statements of importance to shareholders and partners; to the documents that must be presented to partners regarding the preparation and holding of board of directors and general assembly meetings; to the issuing of invitations to these bodies; to the provision of services and information deemed mandatory for voting, transparency and public disclosure and useful in the context of the information society; to questions seeking information and the answers given; and to similar other matters, as well as to the subjects on which this Code and other laws require shareholders or partners to be informed. In addition, the financial statements and their footnotes, the annual report, the board of directors' annual evaluation statement on the degree of compliance with corporate governance principles, the reports of the auditor, the special auditor and the transaction auditors, and the company's answers and notifications on matters concerning shareholders and the capital market requested by the competent boards and ministries, together with other relevant matters, shall be published on the company's website. Failure to comply with this obligation produces all the consequences of a violation of the law and of the board of directors' failure to perform its duties. Penal provisions are reserved. Financial statements and all reports shall remain on the site for three years.

(2) The principles and procedures for accessing the section of the website dedicated to information-society services, including its registration in the trade registry, the fact that this section dedicated to the purposes of this article is open to everyone, the fact that the messages and information contained there are statements and legal declarations of will directed at the parties concerned, and other related matters shall be regulated by a regulation issued by the Ministry of Industry and Trade.

(3) Information published in the section of the website dedicated to the purposes of this article shall be preceded by the phrase "directed message" in parentheses. A message bearing this phrase may only be changed in compliance with the Code and the above-mentioned regulation. It is presumed that a message placed in the dedicated section has been directed.

(4) Printed copies of directed messages shall also be kept pursuant to Article 82. The information to be placed on the website shall be put into text form and written or pasted, in sequence-number order, into a notary-certified book by the company management, indicating the date and time. If a change is later made to the information published on the site, the above procedure shall be repeated for the change.

ARTICLE 562. -

l) Members of the board of directors of a joint-stock company, managers of a limited company, and active partners in a partnership limited by shares who fail to create the website stipulated in Article 1502 within three months of this Code entering into force, or, if a website already exists, fail to make the required dedication within the same period, shall be punished with imprisonment of up to six months and a judicial fine of one hundred to three hundred days; and the offenders listed in this subparagraph who fail to duly place on the website the content required under the same article shall be punished with imprisonment of up to three months and a judicial fine of up to one hundred days.
