Thursday, 25 September 2014

OSD600: Release 0.1

I finished the 0.1 release of the project and sent David the pull request (https://github.com/humphd/filer/pull/5) today. The goal of the 0.1 release was to add the du command to the Filer project. By following the project instructions, I did not have much difficulty finishing it.
First of all, I got the code ready by forking Filer to my GitHub account and cloning it to my local computer.

Then I installed Node.js so that I would have the npm package manager on my computer.

Next, following the instructions, I installed Grunt both locally in the project (filer) folder and globally on my computer. At this step I ran into errors while installing Grunt locally. After discussing it with a classmate, I found out that I needed to use a Linux-style command line instead of the Windows command prompt to run "npm install". I ran the command in Git Bash instead, and the installation eventually succeeded.

Next, I ran "grunt test" to test the code, and I ran into problems here as well. After asking David, the issue was resolved by updating to the latest code: the problem had been fixed upstream quite recently, and I had not yet pulled the update.

At this point, I had the whole environment ready, so I started to write the code. I reviewed the code for the cat and ls commands, because the du command is similar to both. Both du and cat take two parameters – data and callback. And both du and ls need to walk the deep contents of a directory tree and deal with file sizes, file paths, and file contents. With these two blocks of code as references, I built the du command without much difficulty. Five files needed to be modified or added:
  • MODIFY: /README.md
  • MODIFY: /test/index.js
  • MODIFY: /src/shell/shell.js
  • MODIFY: /dist/filer.js
  • ADD: /test/spec/shell/du.spec.js

At last, the code passed the tests with no errors.

Tuesday, 23 September 2014

SPO600: Static linking vs. Dynamic linking

A linker is a system program that takes relocatable object files and command-line arguments and generates an executable object file. The linker is responsible for laying out the individual parts of the object files in the executable image, ensuring that all the required code and data are available to the image, and resolving any required addresses correctly.

Static and dynamic linking are two processes of collecting and combining multiple object files in order to create a single executable. 

Static linking is the process of copying all library modules used in the program into the final executable image. This is performed by the linker and it is done as the last step of the compilation process.

With dynamic linking, only the name of the shared library is placed in the final executable file; the actual linking takes place at run time, when both the executable and the library have been loaded into memory.

Differences between static linking and dynamic linking:

Sharing external programs
- Static linking: an externally called program cannot be shared; duplicate copies of it are required in memory.
- Dynamic linking: several programs can use a single copy of an executable module.

File size
- Static linking: statically linked files are significantly larger, because the external programs are built into the executable files.
- Dynamic linking: significantly reduces the size of executable programs, because only one copy of the shared library is used.

Ease of updating
- Static linking: if any of the external programs changes, the executable has to be recompiled and re-linked, or the change will not be reflected in the existing executable file.
- Dynamic linking: individual shared modules can be updated and recompiled to pick up bug fixes, without rebuilding the programs that use them.

Speed
- Static linking: programs that use statically linked libraries are usually faster than those that use shared libraries.
- Dynamic linking: programs that use shared libraries are usually slower than those that use statically linked libraries.

Compatibility
- Static linking: all code is contained in a single executable module, so the program does not run into library compatibility issues.
- Dynamic linking: programs depend on having a compatible library; if a library changes, applications might have to be reworked to stay compatible with the new version.

Advantages:

Static linking:
- Efficient at run time.
- Requires fewer system calls.
- Can make binaries easier to distribute to diverse user environments.
- Lets the code run in very limited environments.

Dynamic linking:
- More flexible.
- More efficient in resource utilization, taking less memory, cache space, and disk space.
- Makes it easier to update the code and fix bugs.
Source:




Friday, 19 September 2014

SPO600: Lab2

Brief description: 
Wrote a simple C program to display "Hello World!" and compiled it with "gcc -g -O0 -fno-builtin". Then used the "objdump" command with the options -f, -d, -s, and --source to display information about the output file.
Then made the following changes one at a time to observe the differences in the results.

5) Move the printf() call to a separate function named output(), and call that function from main().

Original output file: a.out
Output file after change: hello_all5.out

Before the change:
Running the objdump command showed only a <main> section for the source code.

After the change:
Running the same command showed a separate <output> section in addition to <main>.



6) Remove -O0 and add -O3 to the gcc options. Note and explain the difference in the compiled code.
-O3 enables further optimizations aimed at execution speed. It typically reduces execution time, but can increase code size, memory usage, and compile time.

Output file before change: hello_all5.out
Output file after change: hello_all6.out

I used the "time" command to check the execution times of the files above and got the following results.


hello_all6.out is compiled with the -O3 option, so it is supposed to have a shorter execution time. However, it took much longer in real time than the previous file, although it did take less sys time. For a program this small, the real time is dominated by process startup and system noise, so it is not a reliable measure of the optimization.

I also compared the sizes of the output files built with -O0 and -O3. hello_all5.out, compiled with -O0, is smaller than hello_all6.out, compiled with -O3. Apparently, compiling with -O3 does not reduce the file size; instead, it increases it, since -O3 optimizes for speed (for example, by inlining) rather than for size.


The following screenshots show the results of running "objdump --source" on both files.

Comparing the two results, I found:
1 --- The order of the <main> and <output> sections differs between the two results. For hello_all5.out, compiled with the -O0 option, the <main> section appears after the <frame_dummy> section, and the <output> section appears after <main>. By contrast, for hello_all6.out, compiled with the -O3 option, the <main> section appears right after the line "Disassembly of section .text", and the <output> section still appears after <frame_dummy>.

2 --- The contents of the <main> and <output> sections also differ between the two results. For hello_all6.out, both sections are shorter than those of hello_all5.out: hello_all5.out has 6 instructions in <main> and 9 in <output>, while hello_all6.out has only 3 in <main> and 4 in <output>.

When I ran "objdump -s" on both files, I found more differences.
The contents of the .debug_line and .debug_str sections of hello_all5.out are shorter than those of hello_all6.out. Moreover, the output for hello_all6.out has one extra section: .debug_ranges.
Contents of section .debug_str generated by hello_all5.out:

It is good to know that, given different compiler options, the compiler builds the program in different ways. Each option serves a different purpose, and accordingly the assembler contents of the object files differ as well.

The "objdump" command makes it easy to see the assembler contents of an object file, and it is a good starting point for learning assembly language. However, I still don't fully understand what the assembler output means. As I learn more assembly, I expect that will no longer be a problem.

Sunday, 14 September 2014

OSD600/DPS909: First Glance at Node.js

I first heard about Node.js from my friend, who worked at CDOT @ Seneca. He told me that Node.js is server-side JavaScript and that his main job was working with it. That gave me my first impression of Node.js: a JavaScript library for the server side.

Now I am taking the Open Source Development course, and Node.js is one of the topics on the case study list. To be honest, Node.js is the only topic I had heard of before. This is an opportunity for me to dig into what Node.js is and how it works on the server.

This is how Wikipedia describes Node.js:
"Node.js is a cross-platform runtime environment for server-side and networking applications. Node.js applications are written in JavaScript, and can be run within the Node.js runtime on OS X, Microsoft Windows and Linux with no changes."

According to Wikipedia, Node.js is gaining popularity and has been adopted as a high-performance server-side platform by Groupon, SAP, LinkedIn, Microsoft, Yahoo!, Walmart, and PayPal.

On nodejs.org, Node.js is described as follows:
“Node.js® is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.”

This description provides more information about Node.js: it uses an event-driven, non-blocking I/O model, and I will find out later what those terms mean.

The first project I am working on in OSD600 is to give Filer the same "du" functionality that Unix has. Filer is a POSIX-like file system interface for Node.js and browser-based JavaScript. It seems that my research on Node.js will be helpful for this first project. I may keep working on Filer in my later projects, or I may switch to MakeDrive (a JavaScript library and server (Node.js) that provides an offline-first, always-available, syncing filesystem for the web), which I am interested in.


Source: 

SPO600: Procedure of bug fixing on open source software - Bug 1041788@ Bugzilla@Mozilla

I used the advanced search at Bugzilla@Mozilla with the following criteria: Status - Resolved, Product - Firefox, Resolution - Fixed, and Classification - Client Software, to find a resolved bug. It showed a list of bugs matching those criteria. I sorted them by the Changed date and found Bug 1041788 - Unable to close tab after slow script warning at chrome://browser/content/tabbrowser.xml:1989. I chose this bug because it is unusual and there were a lot of conversations about it, which makes it easy for readers to follow what happened during the process.

This bug was reported on 2014-07-21 and last modified on 2014-08-06. Eleven users were involved in the comments. The reporter, bull500, has less experience in the community, while Mike Conley, who was assigned to fix the bug, has a lot of experience in it. Some other experienced users and a QA contact (Paul Silaghi) also took part in the reviews.

During the bug-fixing process, bull500 reported the bug (being unable to close a tab after opening a large number of tabs and getting a slow script warning), together with details such as the OS and software versions, the steps to reproduce it, and the results. Paul and Mike then tried to reproduce it. Paul failed, but Mike succeeded and found what caused the bug. Mike then asked another experienced user, Tim, for his opinion. Tim suggested a patch for the bug and asked for feedback from bull500 and Mike. Neither had the issue after installing the patch, so the bug was resolved.
The whole process of resolving this bug took 16 days. bull500 got a response the day after he reported the bug. When he reported hitting the same issue five days later, he received an immediate response. The participants then actively discussed the issue and tracked down its cause. Nine days after the bug was reported, a solution came up, and in the following days they discussed how it worked.

Moreover, I also browsed Bugzilla@Eclipse. It follows exactly the same procedure as Bugzilla@Mozilla.

After reviewing the process for the bug above, I have some idea of how open source projects resolve bugs. However, I am still confused about who assigns the tasks, or how developers pick them up.

Sunday, 7 September 2014

OSD600/DPS909: Opened license vs. Closed license/EULAs

I read the BSD 2-Clause License and Skype's license, and I was surprised by the following differences between the two. First of all, their lengths are quite different: the BSD 2-Clause license is very short, while Skype's license is quite long. Second, the content differs greatly as well. The BSD 2-Clause license contains only the license text and the conditions for redistribution and use. Skype's license has much more content, with 22 sections, each covering a different aspect such as charges, payments, licensing, and use of the software, the products, and Skype itself; it packs as much related content as possible into the license. Third, the tone and wording of the two licenses are different. The BSD 2-Clause license does not read strictly like a business document, whereas the Skype license is a formal business document, with everything such a document should have.