Thursday, April 27, 2017

Optimizing GC settings for Sakai

For the last few weeks I've been experimenting with Java Virtual Machine (JVM) optimization. I've always seen JVM tuning as somewhat mystical dark magic: you can never be completely sure whether a change will improve the current configuration or sink your servers. JVM parametrization is something I try to touch as little as I can, but this time I had to roll up my sleeves and do some experimentation on the production servers.

We recently upgraded our Sakai [1] platform to version 11.3. It was a great change: we got some interesting new features and an awesome new responsive interface. Java was upgraded to 1.8 as well, so some of the parameters we had set up no longer worked.

Our first move was to remove all the non-working parameters and just define the size of the old generation memory. We had the following configuration on our servers:

       JAVA_OPTS="-server -d64 -Djava.awt.headless=true 
           -Xms6g -Xmx6g
           -XX:+UseConcMarkSweepGC
           -XX:+CMSParallelRemarkEnabled 
           -Duser.language=ca 
           -Duser.region=ES -Dhttp.agent=Sakai
           -Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false 
           -Dsun.lang.ClassLoader.allowArraySyntax=true
           -Dcom.sun.management.jmxremote"

We run Sakai on a set of servers with 8 GB of RAM each, so we defined a 6 GB heap, most of which goes to the CMS old generation (tenured) space. We obtained a distribution like this:

NON-HEAP: around 750 MB committed.
HEAP:
  • Par Eden + Par Survivor (young generation): around 300 MB committed.
  • CMS Old Gen: around 5.5 GB committed.

That configuration left around 1 GB for the OS, so initially it seemed a good configuration. As you can see, we had two extra parameters to handle the Java garbage collector:

-XX:+UseConcMarkSweepGC: Enables the use of the CMS (concurrent mark sweep) garbage collector for the old generation.

-XX:+CMSParallelRemarkEnabled: Runs the CMS remark phase with multiple threads, shortening that stop-the-world pause. It's a good option if your server has many cores.

Out of Memory and JVM Pauses

This setup seemed to work for us. Garbage collections were triggered automatically by the JVM when needed, but we quickly started to see memory problems. Sometimes, when a server's memory occupancy was high, it started to behave dramatically wrong, and random errors appeared in the logs.

We use PSI Probe [2] to monitor JVM behavior. It collects information about threads, memory, CPU, data sources, etc. With it we found that the garbage collector wasn't freeing CMS old gen memory when the errors occurred. At that point we tried to restart Tomcat and couldn't: we got a message saying the server could not be stopped because the heap was out of memory.
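PSI Probe shows this graphically; the same old gen occupancy can also be sampled from a shell with jstat (a quick sketch, assuming the JDK tools are installed and `<tomcat-pid>` stands for the Tomcat process id):

```
# Print GC utilisation every 5 seconds; the O column is the
# old generation occupancy as a percentage of its capacity.
jstat -gcutil <tomcat-pid> 5000
```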

Capture from the memory use screen of PSI Probe

After analyzing the logs and seeing how random it was (it happened at different times and on different servers), we figured out that all the errors were produced because the servers were running out of memory. So we started to monitor the GC process in more detail, adding the following lines to JAVA_OPTS:

     -Xloggc:/home/sakai/logs/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps
     -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=2M

Those lines write the GC operations to a log file so they can be analyzed. To visualize the data we used GCViewer, an open source project from Tagtraum [3]. It extracts a lot of information about memory and garbage collection from the logs.

With this application we saw that when memory usage got close to the limit, garbage collector pauses increased: there were a lot of them, some up to 30 seconds long. That, of course, caused the server to stop responding.
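GCViewer does the heavy lifting, but pause times can also be pulled straight out of the log. Here is a minimal sketch (my own, not part of our original workflow), assuming the log was produced with the -XX:+PrintGCDetails flag above, whose stop-the-world entries end in a pattern like `real=0.12 secs`:

```python
import re

# Stop-the-world entries in -XX:+PrintGCDetails output end with
# "[Times: user=... sys=..., real=N.NN secs]"; 'real' is the wall-clock pause.
PAUSE_RE = re.compile(r"real=(\d+\.\d+) secs")

def pause_times(log_lines):
    """Return the wall-clock pause durations (seconds) found in GC log lines."""
    return [float(m.group(1)) for line in log_lines
            for m in PAUSE_RE.finditer(line)]

def worst_pauses(log_lines, threshold=1.0):
    """Pauses longer than `threshold` seconds -- the ones users notice."""
    return [p for p in pause_times(log_lines) if p > threshold]
```

Feeding it `open('/home/sakai/logs/gc.log')` would have flagged our 30-second pauses immediately.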

So our first thought was to prevent occupancy from reaching the critical level that made the server unstable. Our first attempt used two options that force the garbage collector to free memory:

      -Dsun.rmi.dgc.client.gcInterval=3600000
      -Dsun.rmi.dgc.server.gcInterval=3600000

These use RMI to force a full collection every hour, but it didn't work for us: the CMS old gen kept growing. So we tried other parameters, and this time it worked:

      -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75


-XX:+UseCMSInitiatingOccupancyOnly: Enables the use of the occupancy value as the only criterion for initiating the CMS collector.

-XX:CMSInitiatingOccupancyFraction: Sets the percentage of the old generation occupancy (0 to 100) at which to start a CMS collection cycle.

On the other hand, we also set one more parameter to change the remark behavior of the garbage collector, following recommendations found in Anatoliy Sokolenko's blog post [4].

-XX:+CMSScavengeBeforeRemark: Enables a young generation scavenge before the CMS remark step, which helps shorten the remark pause.
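Putting the pieces together, our final JAVA_OPTS ended up looking roughly like this (a sketch assembled from the fragments above; the paths and sizes are our values, so adjust them for your own servers):

```shell
JAVA_OPTS="-server -d64 -Djava.awt.headless=true \
    -Xms6g -Xmx6g \
    -XX:+UseConcMarkSweepGC \
    -XX:+CMSParallelRemarkEnabled \
    -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75 \
    -XX:+CMSScavengeBeforeRemark \
    -Xloggc:/home/sakai/logs/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
    -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=2M \
    -Duser.language=ca -Duser.region=ES -Dhttp.agent=Sakai \
    -Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false \
    -Dsun.lang.ClassLoader.allowArraySyntax=true \
    -Dcom.sun.management.jmxremote"
```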

We have been running with these parameters for some days and the new behavior looks good. It keeps the servers from saturating, and the GC pauses during a big CMS old gen collection are very short: under a second.

Capture from GCViewer. It shows the time spent in a full GC performed on the CMS Old Gen

As you can see, what I've really demonstrated by writing this post is that I'm not good at Java tuning. The intention was to show a little of the process we followed. I'm looking for comments about the wrong assumptions I made (I'm sure there are some) and better ways to do this, so I'd really appreciate your comments and suggestions.

References:
[1] Sakai Project: https://sakaiproject.org/
[2] PSI Probe project: https://github.com/psi-probe/psi-probe
[3] Tagtraum, GCViewer web page: http://www.tagtraum.com/gcviewer.html
[4] Anatoliy Sokolenko's blog post: http://blog.sokolenko.me/2014/11/javavm-options-production.html

Other sources I used in my research process:
http://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html


Friday, June 17, 2016

LooWID

More than two years ago Juanjo Meroño and I started the project LooWID (www.loowid.com). As many of you already know, it's an open source videoconference platform based on WebRTC that lets users join small rooms and share webcam, screen, and audio, and exchange files directly from browser to browser. Eduardo Rey has also been involved, designing and implementing the interface, which helped a lot in making it a nice platform.



It has been a good experience since we decided to make it open source instead of offering it just as a service. Before we opened the project we tried to get feedback from friends and family, with some interesting results. We asked them to fill in a survey to work out which business model would be best for LooWID. We felt we should offer it for free, but we needed a way to fund the infrastructure and, if the project grew, perhaps hire a small team to work on it. Among the funding options we asked about were mandatory payment, advertisements, and donations. Around 90% of the answers were 'donations', but in the follow-up questions, which asked whether they would actually donate to use a service like this (small regular donations, or a single bigger one of 10-15€), they all answered a flat 'no'.

So our conclusion was that people won't pay for a service that others offer for free, so we decided to change direction and go fully open, trying to involve more people in the project to keep it alive. We knew our motivation would fade if there was no response, so by opening it up we hoped to get more people involved and more feedback to give us the energy to continue.
In some ways we got it: we received a lot of kind words about the project, translations into languages like German, Russian, and Hungarian, others helped promote the app with their tweets, and Jose Rabal helped us with security. The project got some good contributions, but kept the same core developers it had at the beginning.




Now the 1.4.1 release is out, and it includes some interesting things like oEmbed integration in chat, but we realized we cannot afford big challenges like the one we were working on: the WebRTC relay. We were looking for a way to grow the number of users consuming streams in a LooWID room by turning each client into a relay, building tree structures that would let thousands of users receive video and audio from a simple 1 GB RAM server. Sounds great, doesn't it? But the reality is that the time we can invest after work is too small, and that makes bigger goals like that impossible to take on :-(


So we've frozen the project for a while, trying to rest and recover our energy. I hope we'll soon start working on some new features again.

Monday, August 10, 2015

Writing a game

Today I got the first review for a game I wrote, at www.contenidoandroid.com. Building this game was an excuse to learn everything involved in publishing and promoting an app.

The first phase of "The Cursed Windmill" was to build and publish the game in less than 25 hours. You'll see from the results that an ugly but addictive game can be written in that time, but if you want something more elaborate I think a lot of time has to be spent on graphic design, UX, etc.
I spent around 12 hours writing the code and testing with some friends to give the game the touch of playability it needed to be addictive. The rest of the hours went on drawing the final sprites, selecting sounds, setting up advertisement accounts, and publishing chores (taking screenshots, writing descriptions, ...).

Now I'm teaching myself how to promote it, and at the moment it's very discouraging. I realize that a better game would make things easier, but that wasn't the goal. For now it's only available for Android > 4.4, but I'm thinking about porting it to iOS. Here lies the dilemma: I'd have to spend some money on an iOS device and a $99 developer account.

You can take a look at:
 link-google-play



Friday, September 12, 2014

My history with Sakai

Tomorrow, September 13, is the 10th anniversary of Sakai at UdL. We went into production with Sakai 1.0 rc2 at the University of Lleida in 2004. Quite an achievement, and an adventure that has lasted 10 years and hopefully will last many more. Perhaps it was a little rushed, but luckily it worked out fine.

I will not tell the whole history of UdL and Sakai; I'll tell what I know and feel about my own history with Sakai, which is directly tied to UdL. To get the full UdL version we would need many people's points of view.

So I will start before Sakai. We have to go back to January 2004, when I applied for a temporary position at UdL on a project to provide the university with an open source LMS. The tests were based on knowledge of Java programming (servlets, JSP) and of eLearning. IT service management was looking for a Java developer profile, as they were evaluating the CourseWork platform from Stanford; they wanted developers to improve it and adapt it to UdL's needs. At that time UdL ran WebCT and wanted to replace it with an open source system, in the context of a free software migration across the whole university.


I had written a little Java for my final degree project, but I didn't know anything about servlets or JSP, so I bought a J2SE book, studied for a few days, and took the test along with many others who wanted the position. I passed, and was lucky enough to win a programmer position on the "Virtual Campus" team with two other people. David Barroso was already the team's analyst programmer, which meant he was my direct boss (a really good one).

We ran a pilot of CourseWork with a few subjects from the Computer Science degree, and it seemed to adapt well to our needs. We were also looking closely at the LMS CHEF. When the founding universities of Sakai announced they were joining forces to create an LMS based on the work of those systems, the decision was made.

When Sakai started it lacked many features we thought necessary, like a gradebook and robust tools for assignments and assessment, but it still seemed a platform with great potential. It had big funding and the support of some of the best universities in the world, and that was enough for us to get into the project. UdL's intention with Sakai was to go beyond the capabilities of an LMS and use it as a virtual space for the whole university: eventually providing a set of community sites, and using it for our intranet as well as a development framework for our applications.

So we started working on it, translating Sakai's interface into Catalan and adapting the institutional image. We created sites for the subjects taught at the Escola Politècnica Superior of UdL. On September 13, 2004, the platform went into production.

Sakai 1.0rc2 translated into Catalan and customized for UdL





During the process we realized that translating the whole platform for each version would be very expensive, and internationalization did not appear to be one of the community's imminent efforts, so the IT service manager Carles Mateu and David Barroso decided to offer our support to internationalize Sakai. The idea was to provide a mechanism to translate Sakai easily, without having to modify the source code every time a new version was released. It was an essential feature for us, and it had to be done if we were to continue with Sakai. David contacted the Sakai project's chief director, Dr. Charles Severance, and offered our help to internationalize the whole of Sakai.

Chuck was glad about our offer and the work started soon. Beth Kirschner was in charge of managing our work and syncing it with the Sakai code, and I was lucky to have the responsibility of managing the task on our side. The first thing I did was a proof of concept (PoC) with one tool: I extracted all the strings of a VM (Velocity) tool to a properties file, which was then loaded with Java Properties objects. The PoC worked well, but Beth encouraged me to use ResourceBundles instead of the plain Properties class. I wrote another PoC that way and it worked great. From that point began the tedious task of going through all the code to do the same thing; the result was "tlang" and "rb" objects everywhere. It took three people between two and three months. We also used that process to write the Catalan translation, and we used a Forge instance installed at UdL to synchronize the efforts. We implemented the changes there for Sakai 1.5, and when a tool was completely internationalized I notified Beth so she could apply the changes to Sakai's main branch.
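The ResourceBundle mechanism Beth suggested is still the standard Java one. A tiny self-contained sketch of the idea (illustrative only: it loads the same key=value format from an in-memory string where Sakai would load a per-language .properties file):

```java
import java.io.StringReader;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

public class I18nDemo {
    public static void main(String[] args) throws Exception {
        // Sakai ships one .properties file per tool and language
        // (e.g. a Catalan bundle); here the same key=value format
        // is read from a string to keep the demo self-contained.
        ResourceBundle rb = new PropertyResourceBundle(
                new StringReader("greeting=Hola\n"));
        System.out.println(rb.getString("greeting")); // prints "Hola"
    }
}
```

Swapping Properties for ResourceBundles is what lets the JVM pick the right bundle for the user's Locale without touching the tool's code.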

Although we worked on 1.5, the i18n changes were released in Sakai 2.0. For us it was a success, because it ensured we could continue using the platform for longer. When version 2.0 came out we upgraded from our 1.0rc2. Only one word comes to mind when I remember that upgrade: PAIN. We had very little documentation and had to dig through the code for every error we found. We had to do a preliminary migration to 1.5, running scripts and processes at Sakai startup, and then upgrade to 2.0. The migration process failed on all sides, but with a lot of effort we finally got through it.

Once the platform was upgraded, we started organizing our virtual campus, using the university-wide LMS as an intranet, creating sites for specific areas and services, and granting access to people depending on their profile in LDAP. We also created sites for the rest of the degrees at our university.

From that moment our relationship with Sakai was not so hard; everything went better. The next version we ran was 2.2, which we upgraded to in 2006. By then we had been granted a Mellon Foundation award for our internationalization work in Sakai. It's one of the things I'm proudest of in my career, but it was embittered because the prize money was never claimed; I didn't find out until a couple of years after it happened. The award money was supposed to be spent developing something interesting related to education, so to receive it we had to submit a project proposal detailing how we would spend the $50K. UdL's idea was to create a system to translate Sakai's string bundles easily, as tools like poedit did at the time. The IT service direction thought it better that the project not be done by the same team that customized and internationalized Sakai at UdL (I guess they had other priorities in mind for us), but by people from a computer science research group at UdL. I don't know why they never did the project or the proposal to get the award money, but nowadays I no longer mind. [Some light here... see the comments]

Around that time our team started working on a new project involving a large number of Catalan universities: the Campus Project. It began as a proposal to create an open source LMS from scratch, led by the Open University of Catalonia (UOC). The UdL IT service direction board and David Barroso disagreed with spending 2M€ to finance such a project when open source LMSs like Moodle and Sakai already existed and could receive that investment. The project changed direction toward something involving existing LMSs: a set of tools that would use an OKI OSID middleware implemented for both Moodle and Sakai. Although running external tools in the context of an LMS using standards, with a WS bus to interact with the LMS APIs, was a good idea, I didn't like the plan to use a double-layer OKI OSID to interact with both LMS APIs. I thought it was too complex and hard to maintain.


OKI BUS

We upgraded Sakai again in 2007, to version 2.4 (a release that gave us a lot of headaches). I also won the analyst programmer position on the Virtual Campus team that David Barroso vacated when he became internal projects manager. The selection process left me quite exhausted: long rounds of tests spread out over time, and competition with nearly 30 colleagues, which made it harder to get the grades needed to win. Around then the IT service direction board, Carles Mateu and Cesar Fernandez, resigned over discrepancies with the university's main direction board about how to carry out the free software migration at UdL. It was a shame, because since then we have experienced a strong rollback of free software policies, and the situation of the entire IT service has worsened.

In September of that year, after the job competition ended and I was chosen, I went to spend a couple of weeks at the University of Michigan. My mission was to work on the IMS-TI protocol with Dr. Chuck, to see whether we could use that standard as part of the Campus Project. Those two weeks were very helpful: we built several examples implementing IMS-TI with OSID. I had a good time with Chuck and Beth in Ann Arbor, but I remember that trip especially fondly because, a few days before going to Michigan, I got married in Las Vegas, and we spent our honeymoon in New York.

Once back in Lleida, I insisted several times to the Campus Project architects on switching the standard for registering and launching apps to IMS-TI. Although the people leading the project loved the idea, they already had the architecture they wanted deep in mind, so we went with the original plan.

Several of the partner universities created tools for the system, and UdL took on the responsibility of creating the OSID implementations for Sakai, as well as the tools to register and launch remote tools within Sakai as if they were native. Implementing OSID was very tedious, but it gave me a fairly deep knowledge of everything that later became the Sakai Kernel. Unfortunately the Campus Project was never used, but in parallel IMS-LTI ended up winning.

In April 2008, taking advantage of a visit by Dr. Chuck to Barcelona to attend a conference organized by the Ramon Llull University, we had the first meeting of Spanish universities that ran, or were considering, Sakai.

I went with the new director of IT services at UdL, Carles Fornós. It was there I first saw Sakaigress, the furry pink Sakai mascot; Dr. Chuck was carrying her. I explained to my boss that these teddies were given as a reward for participation in the community, and the first thing he told me was: "we have to get one." During the meeting, the representatives of the two universities already running Sakai, the UPV and us, explained a bit about our experience and resolved the doubts raised by the other universities. At the end of the meeting, to everyone's surprise, Dr. Chuck gave the Sakaigress to us (UdL). He did it for two reasons, which he told me later: first, because we had been working hard in the community on internationalization and on promoting standards like IMS-TI through our work on the Campus Project; and second, to silence some voices of doubt in our university's environment about choosing Sakai instead of Moodle, reaffirming the community's commitment to our university.

Sakaigress

During that meeting the idea also came up of holding the first Sakai workshop: a way to show people how to install Sakai, build tools, and discuss the platform. When my boss heard it, he whispered to me that we should volunteer to organize it, so I did.

At that meeting I also met the man in charge of implementing Sakai at the Valencian International University (VIU); we had talked with his technical staff by mail some days before about the OKI OSID implementation, and they were very interested in that use case. Less than a month later, the team preparing the specifications for VIU's Sakai deployment came to Lleida to visit us. Beforehand I tried to convince Carles Fornós to offer our services to the VIU: customizing Sakai for another university would have been very simple for us, and it was an opportunity to bring UdL funds to keep developers. Carles didn't think it was a good idea, so I never made the offer.
Moreover, when UdL declined to offer services as an institution, I considered doing it personally with the help of some co-workers. At first the people responsible for VIU's technical office liked the idea, but when the moment came to go ahead with the collaboration, the UdL main direction board showed their disapproval (not a prohibition), which made us pull back, given the risk of losing our jobs at UdL if anything went wrong. In the end the work was done by Pentec-Setival (Samoo), and they did a great job. Perhaps it was the best outcome for the Spanish Sakai community, because it gave us a commercial provider supporting Sakai.

In June 2008 we held the first Sakai workshop. It was a very pleasant experience, where colleagues from the UPV, Raul Mengod and David Roldan, along with some staff from UdL's Institute of Education Science (ICE), helped me give talks to other universities that were evaluating Sakai as their LMS.

Soon after, in February 2009, the second Spanish Sakai event was organized in Santiago de Compostela, where the S2U group was consolidated. By then UPNA was about to go into production, migrating the contents of its old LMS, WebCT. At that meeting I showed how to develop tools in Sakai. At UdL we had upgraded to 2.5, and we shared opinions: we had suffered a lot of performance issues and crashes with 2.4, but 2.5 seemed to improve things a lot.

A few days after that event, the UPV invited us to a presentation and a meeting with Michael Korcuska, then executive director of the Sakai Foundation. In Valencia I saw a preview of Sakai 3 for the first time. It was sold as the version that would replace Sakai 2; he said the community would perhaps release a 2.7 version but not a 2.8, and it was expected for 2010.

Truth be told, I loved it, and I spent a lot of time tinkering with and learning the new technologies behind Sakai 3. I attended the workshops offered at the 2009 conference in Boston, and everything seemed to point to the community supporting the move to Sakai 3, or at least it seemed that way to me.

At the third S2U congress in November 2009, I gave a presentation on the benefits and technology behind Sakai 3, to make people aware of the new road the LMS was facing. Unfortunately, we all know how it really went: Sakai 3 passed slowly from "being the replacement" to "something complementary" and finally to "something totally different".

We did some proofs of concept with a hybrid system combining Sakai CLE and OAE, Bedework, BBB, and Kaltura. The PoC was quite promising, but the shift in architecture, after the poor results obtained with the chosen technology stack, frustrated our plans. OAE continues today with another stack, but far from the idea we originally had in mind.

By then we had a large number of tools developed for Sakai with JSF and Spring-Hibernate. That was a problem for the expected migration between Sakai 2 and 3, so in late 2009 and early 2010 we started developing our own JS + REST framework on top of Sakai, implementing tools in a more neutral way that would let us move between platforms with less trauma. Thanks to everything I learned from the Sakai OAE technologies, I designed what is now our tool development framework for Sakai: DataCollector. It lets us link to multiple types of data sources and display them as JS apps inside Sakai; it uses Sakai realms as its permission mechanism and allows building big apps based on templates.
Gradually we have been replacing all the tools written in JSF (poorly maintainable) with tools based on our framework. Although in the end we never moved to OAE, the framework has given us a set of apps far more flexible and maintainable than those written in JSF.

In July 2010 we upgraded to version 2.7, still hoping to soon see Sakai OAE as part of our Virtual Campus ecosystem; everything seemed to fit pretty well. At the end of that month my first son was born and I took a long paternity leave. I was off work, but I wanted to attend the fourth Spanish Sakai congress in Barcelona in November to show all the work done with the DataCollector. I went with my wife and him, the youngest member of the S2U.

In June 2011 we had another meeting in Madrid, organized to show all the S2U members how to coordinate and use JIRA better, so that our contributions could be incorporated into the Sakai trunk. Some time before, we had arranged to implement some functionality together, and it had been difficult to get it into the main code; some universities paid Samoo to have it implemented, but UM and UdL preferred to implement it ourselves. What I really enjoyed at that meeting, though, was seeing how UM had implemented Hudson in their CI process. I loved the idea, and my task over the following months was to refactor our whole process and automate builds, deployments, and tests with Jenkins and Selenium.

Looking backward, I see that from 2010 to 2012 our involvement with the S2U and the whole Sakai community dropped considerably. I guess our eyes were on the shift to the new environment: we concentrated our efforts on developing the DataCollector framework as much as possible, to have a valid exit path for all the tools built since 2004. In addition, the S2U objectives were not in line with ours at that moment. The S2U approach focused on internationalization, which I thought was a mistake, because part of the community was already focused on that and the S2U should not limit itself to those issues.

In July 2013 we did our sixth and latest upgrade. In the upgrade to 2.9 we took the chance to migrate from our script-based user provisioning system to an implementation of Course Management. Mireia Calzada did an excellent job preparing ETLs and helping build an implementation based on Hibernate.

We took that opportunity to open up the functionality, letting teachers and students create their own sites to work together: now they have storage space for their own projects, communication tools, etc. That gave very good results, because people find the virtual campus more useful than in previous years. We also allowed teachers to invite external people and organize their sites as they want. Many of the complaints we had received about Sakai weren't about features Sakai didn't support, but about restrictions imposed by us.

The tasks around that upgrade allowed me to reconnect with the community: reporting and resolving bugs, participating in QA, and contributing what my colleagues and I have translated into Catalan.

During 2013 I also ventured into a personal project related to Sakai: together with Juanjo Meroño from Murcia, I created a feature allowing webcam streaming through Sakai's portal chat. The desire to contribute something personal to free software, and especially to Sakai, motivated me to build it. It was a very nice experience to work with the community again; the help of Neal Caidin and Adrian Fish was key to integrating it into Sakai's main code.

In November 2013, Juanjo and I presented that feature at the sixth Spanish Sakai congress in Madrid. The important thing about that congress was that the whole S2U recovered its synergy; I'm convinced the University of Murcia staff were the key to inspiring the rest of us. If you are interested, you can read my opinion of the event in a previous blog post. Now we have weekly meetings and work as a team; resources flow gently to the needs of the group members, and it goes pretty well.

Now I feel again that I'm part of the Sakai community and the S2U. I guess working closely with its members has let me believe that Sakai has a bit of me in it. I'm waiting to hear when the next S2U meeting will be held, and maybe I'll go with my second son, born this August.

And that is a brief summary of the history as I remember it; maybe some things were different, or happened at a different time. I'll just say thanks to UdL, the Sakai project, and the S2U members for making this experience so amazing.







Monday, November 11, 2013

Thoughts about VI Spanish Sakai Madrid '13

On November 7 and 8 I participated in the 6th Spanish Sakai Meeting in Madrid. I always enjoy these meetings because it's a chance to see friends and learn a lot about what the rest of the Spanish Sakaigers are doing, and, why not, to feel that you are not alone in a country of Moodlers.



In this edition we shared our experiences, fun, and a lot of commitment to the Sakai community. It was probably the most productive Spanish meeting I have ever attended, because we generated a lot of synergy between the members.


This meeting had strong support from the Sakai community and the Apereo Foundation, with the presence of Dr. Ian Dolphin (Executive Director of the Apereo Foundation) and Dr. Neal Caidin (Sakai Community Coordinator). They gave us their vision of where the community is going, stressed the importance of the engagement of the S2U (renamed Spanish Sakai Users) with the Sakai community, and encouraged us to keep working and getting more and more involved.

During the presentations we saw a lot of interesting things: a list of new features that will appear in Sakai 10, by Neal Caidin; a new recommendation system using LTI, by Angel Ruiz and Alberto Corbí from UNIR (a new member of the S2U); thoughts on xAPI-LMS integration by Samoo; and many others that made the session very exciting.

Also, I had time to give some presentations: the video chat in Sakai built with +Juan Jose Meroño Sánchez, the UdL 2.9 migration experience, and a fun session named "Let's be honest, things that we dislike about Sakai". In that session we played with the "sincerity ball": people threw the ball to each other, talked about their concerns about Sakai, and discovered which things about Sakai we would like to change. I consider the session a success because I got what I was looking for: a constructive session with a lot of ideas to improve Sakai, and a way to start working more closely with the Sakai Community.

The most common concerns focused on the interface and usability, most notably on the oldest tools, which don't incorporate functions or components that we are accustomed to seeing today. Other things people missed were: the way information is imported or exported; how notifications are managed depending on the tool; or the state of development of some core tools. All these things must be analysed and dug into to understand why people feel this way.

But if I have to choose one session, I choose the talk about "How to contribute" by the UM members +Juan Jose Meroño Sánchez and +Jose Mariano Luján. They told us about their experience collaborating with the Sakai Community, contributing bug fixes and new features. That opens a door for all the members of S2U to do things better in terms of collaboration, and it gives us a comfortable way to contribute our efforts.

You can find the videos of all the sessions here:
First part
Second part

On the last day, we focused on organizing ourselves and formed teams to work on different topics. These groups will work on Jira triage, i18n, general bugs and features, and security.

I have to mention the extraordinary work done by the S2U representatives +Diego del Blanco Orobitg and +Jose Mariano Lujan in organizing this event. Thanks also to the Universidad Complutense de Madrid and all the sponsors for their work.

It feels like the S2U gears are turning again, and they seem to work better than ever. Perhaps the leadership role that the University of Murcia has assumed has encouraged others to be more proactive with the Sakai community and with each other. I believe that in a short time the group's work will pay off.



Wednesday, July 31, 2013

Sakai Video Plugin Test Fest

Finally we have a stable version of the Sakai video chat plugin based on WebRTC. We had to solve some details to make it work in a clustered environment, but finally we can offer you a release candidate.

Now, our intention is to find out whether that feature can be included in the main Sakai code. For this reason we prepared a test fest to achieve two purposes:

- Catch any bug we missed.
- Let the Sakai community test the functionality. That would help the Sakai CLE members decide whether it would be interesting to put it in trunk or not.

We prepared a Sakai site on the University of Murcia's QA server with the instructions to complete a test of the video call functionality.

If you want to participate, you must create an account here and join the Test Video Chat Test site, where you will find the instructions we wrote. In that site you will also find a sign-up tool to coordinate the availability of people to test with each other.

Monday, July 15, 2013

Etherpad in Sakai CLE

Etherpad is an interesting tool to have in an LMS. It allows you to build collaborative documents in real time. If you are a teacher and you are looking for a way to make your students work together and see their work step by step, this tool is probably for you.

Today, in this post, I'm gonna tell you how to set up Etherpad-lite on a server together with an LTI connector. That will allow you to create pads in your Sakai sites.

Install Etherpad-lite and the LTI connector on Centos 6.4


For this job you will need to install two services. The first is Etherpad-lite itself. It's powered by node.js, so node.js has to be installed too.

The second service is the LTI connector. It handles the LTI requests from an LMS like Sakai, Moodle, Canvas, etc., and launches the Etherpad tool in your LMS site.

Installing Etherpad-lite


First of all, as root, you should install node.js and all the dependencies needed by the two services. So, log in as root and install the needed packages:

    yum -y groupinstall "Development Tools"
    yum -y install wget gzip subversion git-core curl python openssl-devel unzip java-1.6.0-openjdk.x86_64 ant
Then download the node.js sources and compile them:

    wget http://nodejs.org/dist/v0.10.13/node-v0.10.13.tar.gz
    tar xvfz node-v0.10.13.tar.gz
    cd node-v0.10.13
    ./configure
    make

That step can take a long time. When it finishes, install everything on the system:

    make install
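If you want a quick sanity check that the build and install worked, you can look for the binary on the PATH (a minimal sketch; the version printed depends on the tarball you compiled, v0.10.13 in this guide):

```shell
# Sanity check after 'make install': is node on the PATH?
if command -v node >/dev/null 2>&1; then
  NODE_OK="yes"
  node --version
else
  NODE_OK="no"
  echo "node not found; re-run 'make install' as root" >&2
fi
echo "node present: $NODE_OK"
```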
Now you can download etherpad-lite and configure it. First, for security reasons, create a new user and log in as it:

    adduser -m etherpad
    su - etherpad

Download the etherpad-lite code and update it:

    git clone git://github.com/ether/etherpad-lite.git
    cd etherpad-lite
    git pull origin

Copy the configuration template and set it up:

    cp settings.json.template settings.json

Modify the following parts:

    "ip": "your.host.name",
    ...
    "editOnly" : true,
    ...
    "requireSession" : true,

Uncomment the "users" block and change the passwords:

    "users": {
      "admin": {
        "password": "changeme1",
        "is_admin": true
      },
      "user
    ...
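Putting the edits above together, the relevant part of settings.json would look roughly like this (abridged sketch; the host name and password are placeholders, and the template contains many more keys, including a non-admin user entry, that can be left at their defaults):

```json
{
  "ip": "your.host.name",
  "port": 9001,
  "editOnly": true,
  "requireSession": true,
  "users": {
    "admin": { "password": "changeme1", "is_admin": true }
  }
}
```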


Now you can start etherpad-lite for the first time. Execute it from the console; it will take some time on the first start:

    bin/run.sh
You will notice that a new file, 'APIKEY.txt', now exists. It contains a key that lets external applications connect to the Etherpad API. You will need that value later to allow the LTI connector to use the API to create and edit pads.
Note: If you get an error on startup, update the code again and restart it.
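As a sketch of how an external application can use that key against the Etherpad HTTP API (assuming etherpad-lite listens on localhost:9001; the pad name is just an example):

```shell
# Build an Etherpad HTTP API call using the key from APIKEY.txt.
# Host, port, and pad name are illustrative; adjust to your installation.
APIKEY=$(cat APIKEY.txt 2>/dev/null || echo "YOUR-API-KEY")
PADID="demo-pad"
API_URL="http://localhost:9001/api/1/createPad?apikey=${APIKEY}&padID=${PADID}"
echo "$API_URL"
# curl "$API_URL"   # on success the API answers with a JSON object whose "code" is 0
```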

Later, when everything is configured and all dependencies are installed, you can run it as a background service if you want:

    nohup bin/run.sh &

Install the Basic LTI Connector for Etherpad

It's time to install the LTI connector. We are going to install it in etherpad's home, so if you exited the "etherpad" user session, remember to log in again (su - etherpad) and make sure you are in the home directory (cd).
Create a directory to store everything necessary:

    mkdir etherpad-lti-service
    cd etherpad-lti-service

Download an old version of Tomcat, 5.5. The connector could be installed on a newer Tomcat, but it would be harder to explain how to set it up:

    wget http://archive.apache.org/dist/tomcat/tomcat-5/v5.5.36/bin/apache-tomcat-5.5.36.zip
    unzip apache-tomcat-5.5.36.zip
    cd apache-tomcat-5.5.36
    chmod +x bin/*.sh

Edit the .bashrc file in etherpad's home to include the Java and Tomcat system variables:

    vim /home/etherpad/.bashrc

Add the following lines at the top, just after the # .bashrc line:

    export JAVA_HOME=/usr/share/java
    export CATALINA_HOME=/home/etherpad/etherpad-lti-service/apache-tomcat-5.5.36

Save the file and execute:

    bash
Now check out the LTI connector for Etherpad. It provides the LTI integration for Etherpad-lite, and was designed and developed by Mark J. Norton of Nolaria Consulting:

    cd ..
    svn co https://source.sakaiproject.org/contrib/mnorton/etherpad-lti/
    cd etherpad-lti

Create a file called build.properties and add the following line:

    catalina.home=/home/etherpad/etherpad-lti-service/apache-tomcat-5.5.36

Compile with the command:

    ant

Copy the needed libraries to Tomcat's common/lib directory:

    cp lib/* ../apache-tomcat-5.5.36/common/lib/

Create a directory in Tomcat to store the configuration, and copy the sample that comes with the source code:

    mkdir ../apache-tomcat-5.5.36/etherpad
    cp web/eplti.properties ../apache-tomcat-5.5.36/etherpad

Configure the connector by editing the eplti.properties file you just copied.
The OAuth LTI secret; it will be used in the Sakai Basic LTI tool as the shared secret:

    oauth.secret=YOUR-ETHERPAD-LITE-LTI-SECRET

Etherpad's API key; fill it with the value you found in APIKEY.txt on etherpad-lite:

    api.key = ...

Set the etherpad service URL and the administrator roles allowed to create and delete pads on a site:

    etherpad.url=http://your.host.name:9001
    access.create.roles=Admin,Instructor,maintain
    access.delete.roles=Admin,Instructor,maintain
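Put together, a filled-in eplti.properties would look something like this (a sketch; every value is a placeholder to replace with your own secret, API key, and host name):

```properties
oauth.secret=YOUR-ETHERPAD-LITE-LTI-SECRET
api.key=PASTE-THE-VALUE-FROM-APIKEY.txt
etherpad.url=http://your.host.name:9001
access.create.roles=Admin,Instructor,maintain
access.delete.roles=Admin,Instructor,maintain
```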

Now, you can start the service:

    cd ~/etherpad-lti-service/apache-tomcat-5.5.36
    bin/startup.sh
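Optionally, you can check from the server that the connector webapp answers (a sketch; the host and port match the Tomcat defaults used in this guide):

```shell
# URL where the LTI connector should be reachable once Tomcat is up.
EPLTI_URL="http://localhost:8080/eplti"
echo "LTI connector expected at: $EPLTI_URL"
# curl -s -o /dev/null -w "%{http_code}\n" "$EPLTI_URL"   # expect 200 once deployed
```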

Configure your Sakai Basic LTI tool instance

Add a Basic LTI Tool to a site.
Press the edit button at the top right of the tool and provide the following values:

    Remote Tool Url: http://your.host.name:8080/eplti
    Remote Tool Key: admin
    Remote Tool Secret: YOUR-ETHERPAD-LITE-LTI-SECRET

That's all. If you access the tool as an Instructor, you will see a screen that allows you to create new pads associated with the site. Enjoy it.

Set up this as a service

If you want to keep it as a service, you must write two scripts in /etc/init.d that allow the system to start and stop the services on boot.
You can follow the instructions in Setup etherpad-lite as a service, or you can download the one I created to set up my server here. You can also download my version of the Apache Tomcat script to set it up as a service here.
After that you can register them with:

    chkconfig --add tomcat-lti
    chkconfig --add etherpad-lite-lti

Then run the setup command, go to 'System services', and check the recently installed scripts.

Some references I used to write this post that you may find useful:


How to install and run a node.js app on centos 6.4 64bit
Etherpad installation guide
Etherpad LTI Connector