User:Petr Bouianov/FSOSS 2013


Introduction

While I was not originally excited about attending FSOSS, as it was held during our one week off, listening to the lectures helped me achieve a better understanding of open source. Having previously thought that open source meant developers would not earn a significant amount and would, for the most part, volunteer their time to projects, I was pleasantly surprised. The two talks that really made me think about open source and how it affects the world as a whole were the Processing.JS talk by Dylan Segna and Andrei Kopytov and the ARM talk by Andrew Greene and Christopher Markieta. While the technologies are not exactly related, considering one deals with CPU production while the other deals with software content delivery, they both accomplish similar results by exposing more people to content they would otherwise not have access to.


Processing.JS

This talk was presented by Dylan Segna and Andrei Kopytov, both former students who are currently working at CDOT. The talk began with Dylan Segna covering what Processing.JS is and what it is capable of. By allowing the developer to create 2D or 3D graphics using JavaScript, it greatly expands the amount of content potentially available to users. With JavaScript support in browsers being almost universal, all users are able to experience the same content without any limitations or alterations.

Talk

The project demonstrated using Processing.JS was a simple game designed to teach math, fractions specifically, to younger audiences. While the design itself wasn't spectacularly innovative, it brought to my attention the numerous possibilities of not only delivering Flash-style content to users whose devices may not support Flash, but also of using the browser as a gaming platform itself, since Processing.JS supports input from external devices such as mice and keyboards. By being able to work with other web elements such as HTML, CSS, jQuery and JavaScript, Processing.JS offers an immense toolkit at the developer's disposal, able to adapt to almost any type of content being created, from static information to video and game delivery. The main point of their talk was that Processing.JS makes 2D and 3D graphics generation easy, letting virtually anyone with an interesting idea deliver it to a large audience without any restrictions on the user end.
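
As a rough illustration of what working with that toolkit can look like (this is not code from the talk; the canvas id and drawing logic are made up for the example), a Processing.JS sketch can be attached to an ordinary canvas element directly from JavaScript:

 // Assumes processing.js is loaded and the page contains <canvas id="sketch"></canvas>.
 var canvas = document.getElementById("sketch");
 var sketchInstance = new Processing(canvas, function (p) {
   p.setup = function () {
     p.size(300, 200);                        // canvas dimensions in pixels
   };
   p.draw = function () {
     p.background(220);                       // clear each frame
     p.fill(50, 120, 200);
     p.ellipse(p.mouseX, p.mouseY, 20, 20);   // circle follows the mouse
   };
 });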

Problems

There were, however, problems mentioned concerning Processing.JS. Accessing a running Processing.JS sketch from JavaScript proves to be a challenge, as it is not exposed globally, requiring you to reach it through the Processing.instances property. Another large problem that was mentioned was the difficulty of debugging Processing.JS code. Not only does attempting to debug in the browser produce references to line numbers that don't exist within the project, but exporting to a Processing IDE means having to separate your JavaScript from your Processing code before you can even start debugging.
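
To give a sense of the workaround they described, a running sketch can be reached from plain JavaScript through the Processing.instances collection; the reset() function below is hypothetical and only stands in for whatever the sketch actually exposes:

 // Assumes a sketch is already running on the page; Processing.instances
 // holds every running sketch, so the first one can be grabbed like this:
 var sketch = Processing.instances[0];
 // Globals defined in the Processing code become properties of that instance,
 // e.g. a hypothetical reset() function:
 if (sketch && typeof sketch.reset === "function") {
   sketch.reset();
 }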

Thoughts

The Processing.JS talk, lasting only half an hour, was fairly short, but the information presented definitely gave me a lot to think about. Being able to develop universally accessible content by working directly with code, without requiring a lot of graphics experience, can have huge impacts on the Internet and the global community as a whole. With developers not being restricted by hardware or even software (other than the browser having to support JavaScript), a lot of new and innovative ideas are able to enter the field without having to clear the cost hurdle typically associated with game and graphics development. With Processing.JS being open source, any newcomer is able to dive right in and create something of their own in less time without draining their wallet.


ARM Processors

Presented by Andrew Greene and Christopher Markieta, the ARM talk discussed the ARM processor architecture, its advantages and its competition. Developed in Britain, ARM is a CPU instruction set architecture based on the RISC (Reduced Instruction Set Computing) approach, as compared to Intel's CISC (Complex Instruction Set Computing) approach. Originally developed for educational purposes, ARM processors have quickly taken off and are widely used in today's mobile devices.

Talk

By using the RISC approach, ARM processors are able to focus on efficiency and speed, while the CISC approach prefers maximizing processor output even at the cost of extra power consumption. Being better suited to a mobile environment that does not require complex calculations, ARM chips feature not only a smaller die size but also a faster development time, resulting in cheaper CPUs. Andrew Greene then talked about the MIPS-to-watts ratio used to measure a CPU's performance. The MIPS (Million Instructions Per Second) to watts ratio determines just how many instructions a particular CPU can process for a given amount of power. With ARM chips focusing on a simpler and more efficient solution, their resulting performance ratio makes for a popular selling point.
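
As a quick worked example of how that ratio is read (the figures here are invented, not numbers from the talk):

 // A hypothetical chip executing 2,000 million instructions per second at 0.5 W:
 var mips = 2000;                      // million instructions per second
 var watts = 0.5;                      // power draw
 var mipsPerWatt = mips / watts;       // 2000 / 0.5 = 4000 MIPS per watt
 console.log(mipsPerWatt + " MIPS/W"); // the higher the value, the more work per unit of energy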

Christopher Markieta then went on to explain ARM's entry into 64-bit architectures with their new processor designs, the Cortex-A57 and Cortex-A53. By providing 64-bit alternatives, ARM would not only be producing even more efficient chips but would also be able to enter the server market, offering plain boards with up to 4 nodes, each node containing 4 cores. While these server setups would not be as powerful as ones built on CISC-based chips, they would be significantly more efficient. Christopher went on to talk about the new processors having a big.LITTLE configuration. This allows the processor to dynamically switch between a higher-power CPU for foreground tasks and a lower-power CPU for background tasks. This would allow the new ARM processors to be the most efficient 64-bit platforms, thanks to their innate power efficiency and computing power.
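
To make the big.LITTLE idea concrete, here is a deliberately simplified sketch of that scheduling decision; real schedulers are far more involved, and this toy function is purely illustrative:

 // Toy model only: latency-sensitive foreground work goes to the "big" cores
 // (e.g. Cortex-A57), background work to the power-sipping "LITTLE" cores (Cortex-A53).
 function chooseCluster(task) {
   return task.isForeground ? "big (Cortex-A57)" : "LITTLE (Cortex-A53)";
 }
 console.log(chooseCluster({ name: "UI render", isForeground: true }));   // big (Cortex-A57)
 console.log(chooseCluster({ name: "mail sync", isForeground: false }));  // LITTLE (Cortex-A53)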

Thoughts

With ARM processors dominating the mobile device market, the future for ARM is a bright one. Since ARM licenses its architecture, it does not directly compete with any manufacturers. With prices kept low, ARM-powered devices can be produced for a very low cost. Thanks to the increased availability of mobile computing, even the most rural areas are slowly becoming Internet-aware. With high-powered mobile devices becoming more and more commonly available, these devices are slowly transforming from tools that merely let individuals consume content into tools that let them create it with ease.


Analysis

With both of the talks being given by Seneca students or graduates, it was easy to see that the presenters were excited about their topics.

Both presentations talked about simplification and efficiency moving forward. With Processing.JS it was the ease of generating new content to be delivered worldwide, while with ARM it was the efficiency of new, smaller designs that allows their devices to expand the market. While these two parts of the open source community are not directly related, indirectly they are both responsible for growing the online community as a whole. By providing a low-cost point of entry for developers and users alike, they significantly reduce the financial risks of development.

After listening to both of the talks, it can easily be said that the open source community is working hard to ensure that tools are available for users and developers alike to consume and create media as they wish. By keeping the barrier to entry very low, this not only ensures a higher global online presence, but also that new and innovative ideas come to light more frequently.

All of the speakers shared the same view of the open source community as an enormous, ever-growing one. The developers working on open source projects aren't simply working on web content but on a wide variety of interesting projects. With the community being so open to providing not only information but also peer review, it was definitely described as an ideal way to further various technologies and directly improve the user and developer experience.


Conclusion

Overall, FSOSS was a much better experience than I had anticipated. Not having high hopes going in, especially since it was held during our week off, I was pleasantly surprised not only by the selection of talk topics but by how much they made me think. While there was not a vast amount of information presented, the ideas that were brought up, with Processing.JS enabling the web for more people and cheaper ARM-based mobile devices lowering the cost for users to have an Internet presence, really made me realize how the open source community is helping advance global interconnectivity. Not only is it improving the quality of the user experience, but the user market itself is being expanded as new individuals become web-aware.

This symposium definitely showed me a different side of open source than I had previously seen and made me realize that open source developers do in fact earn good money for open source work. While I'm still not certain that I want a future in open source development, it's nice to know that there are a lot of opportunities beyond front-end development when it comes to open source.

Being a more enjoyable experience than expected, I can say with near-certainty that I will be attending next year. I just hope there is a bit more variety to the presentations, even if it means scheduling them at 30-minute intervals, as a fair number of the talks ended at around that mark.