
Camilo Silva :: Blog

April 14, 2010

Hello there!

Well, it has been a while since I last stopped by to post something. I guess I should mention that it has been nearly two years since my last post, and much has happened in my life after PIRE!

I'm not sure where to start!

I graduated as a computer engineer in Fall 2009, and I'm currently completing my last semester in Mathematical Sciences at FIU. My main goal after graduation is to earn my Ph.D.--a dream that started evolving after my research experience in the PIRE program.

Well, after my PIRE program concluded, I started working with Dr. Sadjadi as an undergraduate researcher. I continued work on my past project along with Mr. Mike Robinson. By the end of April 2009, our research project's poster had been presented at conferences ranging from local to state to national, such as NCUR, FGLSAMP Expo 2009, and ISBRA 2009. Our research team also earned several awards for our poster presentations. At the end of April 2009, we submitted our paper as a technical report to FIU.

During Summer 2009, I had the opportunity to travel to Boston and participate in a summer research program as an Amgen Scholar at MIT. I worked in the Department of Bioengineering on a project dealing with protein-protein interactions in response systems, mapping such interactions into networks using high-throughput data. I really enjoyed my time at MIT, since I learned a lot about bioinformatics and genetics. At the same time, I was able to experience what college life is all about--at FIU, college life is limited by its structure as a commuter campus.

In Fall 2009, I dedicated my time solely to my studies and my Senior Design project. This was a really difficult semester for me--full of personal challenges... Yeah... During this time, I broke up with my girlfriend (we had a relationship of over six years). Oh well, on a positive note, our Sr. Design project was a total success! My teammates and I developed a mobile device from scratch--YES, from scratch, meaning that we literally designed everything and put everything together in less than four months! We used a Microchip dsPIC 16-bit microcontroller, an SI3000 codec and sampler module, the Speex codec library, a GLCD, and a 9 V battery to power our device. The GLCD needed approx. 5 V and our microcontroller worked with 3.3 V--a huge power saver! Of course, our power regulation was not perfect and we ended up losing about 0.7 V--but, who cares? Our communication pattern resembled a client-server design, built around our testing needs, even though that is not the most practical or efficient way to do it. We had to develop our own communication protocol for the packets sent from the mobile device to the server and then on to the next mobile device. We programmed in C++ and C.
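I can't share our actual protocol, but as a rough illustration of the kind of packet framing a device-to-server-to-device relay needs, here is a minimal sketch in C. Every field name and size here is hypothetical, not our real format:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical packet header for relaying audio frames from one
 * mobile device to the server and on to the next device.
 * None of these fields are from our real protocol. */
typedef struct {
    uint8_t  src_id;       /* sending device */
    uint8_t  dst_id;       /* destination device */
    uint16_t seq;          /* sequence number, for ordering on arrival */
    uint16_t payload_len;  /* bytes of encoded audio that follow */
} PacketHeader;

/* Serialize the header into a byte buffer, big-endian on the wire,
 * so both ends agree on the layout regardless of endianness. */
size_t pack_header(const PacketHeader *h, uint8_t *buf) {
    buf[0] = h->src_id;
    buf[1] = h->dst_id;
    buf[2] = (uint8_t)(h->seq >> 8);
    buf[3] = (uint8_t)(h->seq & 0xFF);
    buf[4] = (uint8_t)(h->payload_len >> 8);
    buf[5] = (uint8_t)(h->payload_len & 0xFF);
    return 6;  /* header size on the wire */
}
```

The fixed wire layout is the important part: with a shared header like this, the server can route a frame to the next device without decoding the audio payload at all.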

During my Spring 2010 semester, I started working as a full-time intern at Crispin Porter + Bogusky, one of the most talented and renowned advertising firms of this era. During that experience, I grew a lot as a software developer! I implemented software engineering design patterns such as Model-View-Controller in my web projects, and I programmed my first functional Android application, which will be used in an important ad campaign for one of CP+B's clients (sorry, I can't tell you the details due to a confidentiality agreement! :P ). All in all, the experience just made me really thirsty to learn more about mobile development and to become an experienced and talented programmer!

And as of this moment, I am supposed to be working on my take-home exam for Advanced Differential Equations, but I figured that sharing my experiences with you out of the blue might be a more positive distraction than checking my Facebook!

Honestly, it has not been an easy path--that of a software developer--but it sure has been fun and worth it! The best thing is that there is always tons to learn and much to discover!

Cheers mate! (with my Spanish accent).



Keywords: blog, experience, graduation, mobile development, software engineering, work

Posted by Camilo Silva | 0 comment(s)

July 29, 2008


Camilo A. Silva

July 21-27


Time has gone by fast. This was my last week in Guadalajara. But, let me tell you—what a great time I had! I am happy because I had a great opportunity to live here. Going to another country and becoming familiar with it is a memorable experience. In my case, I was able to excel in my studies, but I was also able to make new Mexican friends.


Ok. Going back to my activities: during this past week I had the meeting with my team members on Monday, where we discussed the progress of our work—and guess what? We completed it. All the parallelization of our project is complete. Only one thing was left: the error handling of the program. Thus, Gary and I decided to work together on this task.


My friend Gary did a great job in getting his part running promptly, while it took me a bit longer to complete mine. In the end, we were able to run the first set of data during the weekend (Michael kindly asked GCB users to let us use the cluster). The first set of data completed with no errors so far. There was one complication, though—I will comment on it below in the “ISSUES/PROBLEMS” section.


All in all, I was able to complete my program. I was able to fulfill my goal. My program is a parallel program that is capable of handling MPI communication errors only.


On Wednesday, I had my last meeting with Dr. Duran. During that meeting I shared with him the progress of my work and the steps to follow afterwards. He was of great help throughout my visit.


Another thought that I want to emphasize is that I never expected to have such a great working experience with my friends from the Bioinformatics group. I was amazed at how well we were able to “virtually” work together from different parts of the globe: China, Mexico, and the USA. Truly, I was also glad for the great leadership and companionship of Mr. Michael Robinson. He was always there to help me, full of patience and good guidance. My partner Gary is a great member as well—a hard worker and very knowledgeable about programming. I sincerely feel that I have the best team members.


Looking back on the wonderful time that I have spent in Guadalajara, I am proud to say that I do not regret anything at all. I am sincerely grateful to FIU CIS and the PIRE program for giving me this prestigious opportunity.


One important thing: throughout my PIRE experience I had the chance to work with many important people who helped me solve issues. I never had the opportunity to thank them publicly, so I want to take the time to thank them all:

  1. I want to thank God for giving me the opportunity to participate in this research experience
  2. All the FIU students that helped me: David, Juan Carlos, both Javier Delgado and Javier Figueroa, and Michael Robinson
  3. Big thanks to Michael Robinson because he was a great team leader; I am both proud and happy to have partnered with him and Gary
  4. Many thanks to Gary because he gave me tons of insight into my programs
  5. Special thanks to all Professors and Administrators in charge of PIRE: Dr. Sadjadi, Dr. Graham, Ms. Carbajo, Dr. Hector Duran, Dean Yi Deng, and every single person that helped out with the PIRE program
  6. Lastly, special thanks to both of my Professors and Advisors in Mexico and the USA: Dr. Duran and Dr. S. Masoud Sadjadi

I am proud to say that I completed my proposed plan for the summer. I completed my parallelized program with the capability of self-healing whenever an MPI error is detected at the time the master node sends a message to the slave nodes.

Just as I mentioned in the “ACTIVITIES” section, there was a little problem that we had during the runtime of our project. Last night during our meeting, Michael shared with us that the problem dealt with something known as a “memory leak.” To be honest, I am not quite sure what the cause of the problem is or how it should be resolved. This is something that we will figure out as a group, and Michael decided to look into it.

The big plan now is to write the technical paper and complete my PIRE DVD on time.




FIRST READINGS: “MPI Error Handling”
The basic theory that I learned from all those articles about error handling is that an MPI communicator is more than just the group of processes that belong to it. Among the items that a communicator carries hidden inside its body is its error handler. It is important to point out that whether an error message is printed or not depends on the implementation.


MPI errors arise whenever messages are incorrectly constructed, addressed, sent, or received. Please note that MPI does not provide mechanisms for dealing with failures in the communication system itself. What MPI does provide is mechanisms for handling recoverable errors, which simply means that the default behavior of aborting the MPI program can be replaced with an appropriate error handler. Then, in order for the application to identify an error code, the MPI_Error_class routine converts any error code into one of a small set of standard error codes known as error classes. Furthermore, MPI provides only two predefined error handlers: MPI_ERRORS_ARE_FATAL, the default, which causes the MPI program to abort whenever an error is found; and MPI_ERRORS_RETURN, which causes MPI to return an error value instead of aborting.
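Putting those pieces together, here is a minimal sketch in C of switching to MPI_ERRORS_RETURN and classifying a failure. It assumes a working MPI installation (compile with mpicc, run with mpirun) and deliberately provokes an invalid-rank error just to have something to classify:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Replace the default MPI_ERRORS_ARE_FATAL handler so that
     * errors come back as return codes instead of aborting. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Deliberately trigger an error: rank `size` does not exist,
     * so this receive is invalid and returns an error code. */
    int buf, err, errclass, len;
    char msg[MPI_MAX_ERROR_STRING];
    err = MPI_Recv(&buf, 1, MPI_INT, size, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
    if (err != MPI_SUCCESS) {
        MPI_Error_class(err, &errclass);   /* map to a standard error class */
        MPI_Error_string(err, msg, &len);  /* human-readable description */
        fprintf(stderr, "error class %d: %s\n", errclass, msg);
    }

    MPI_Finalize();
    return 0;
}
```

In a self-healing program, the branch that prints the error is where a recovery action (for example, resending to a different node) would go instead of aborting.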


Since I read a whole lot of different papers and documents, I can generalize and say that all of them were helpful. Some of the documents were heavy on theory, which was a bit monotonous, while others kept the definitions very simple and provided great examples. All of them were easy to comprehend.


This topic was extremely important for the last part of my project because it helped me a lot in adding a self-healing capability to my project.

I wanted to learn what strategies exist for debugging MPI programs, and I was exposed to techniques that are used nowadays to debug parallel programs. The first technique that I learned about was “printf()” debugging. This technique is not that effective in parallel programs because of the multiplicative effect: many nodes will print the same thing unless something in the printout identifies them. Also, “printf()” techniques can only display a limited subset of the process state.
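The usual workaround for that multiplicative effect is to tag every line with the process rank. A tiny helper sketch in plain C (the rank is passed in as an ordinary variable here; in a real MPI program it would come from MPI_Comm_rank):

```c
#include <stdio.h>

/* Prefix a debug message with the rank of the process that produced it,
 * so interleaved output from many processes can be told apart.
 * Writes into `out` and returns the number of characters written. */
int tagged_log(char *out, size_t outsize, int rank, const char *msg) {
    return snprintf(out, outsize, "[rank %d] %s", rank, msg);
}
```

With every line prefixed like this, output from all processes can be sorted or grepped by rank after a run, which makes printf debugging at least tolerable in parallel.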


Another type of debugging technique is to use serial debuggers in parallel. Although serial debuggers were not developed for parallel programs, they might provide some insight in finding certain bugs.


Memory-checking debuggers look for erroneous patterns such as accessing memory outside of an array or the local stack, or using heap memory that was already freed. One advantage of using them is that they report all errors in a file, with line numbers. The downside is that they cannot be used interactively and cannot be attached to already-running processes.


The last category of debugging techniques is the parallel debuggers. Besides all the functionality common to debuggers—setting breakpoints, examining variables, stepping through code—these debuggers can individually monitor and control all processes in a running MPI job.


This topic was very interesting, and all the authors did a great job in explaining it—although most of the information could be found by reading just one of the five articles that I read.


This information was important to me because it helped me understand the different debugging techniques that can be used in a parallel MPI program.


Posted by Camilo Silva | 0 comment(s)

July 21, 2008

Just wanted to take this time and let you all know that Colombia's independence from Spain was declared on July 20, 1810. For all of those that have Colombian friends, it's not too late to congratulate us on our Independence.

There is a website that I found that talks about the story of our Independence, check it out:



Have a nice day my friends and God Bless you and my beautiful Nation, Colombia!!!


Keywords: Colombia

Posted by Camilo Silva | 1 comment(s)


Camilo A. Silva

July 21


During last week, I participated in the REU's 2nd meeting, where I shared my project progress with my peers. I presented to them the details of the MPI program and all of its communication patterns.


On Monday and Wednesday, I had my weekly meetings with my group members where we shared the progress of the project so far and discussed issues and challenges to be solved.


I am happy to report that the parallel program is running! The only thing is that it only runs successfully when there are no more tasks than nodes. Whenever there are more tasks than nodes, a file I/O open error occurs. This bug should be fixed soon.

The biggest challenge that I had this week dealt with communication with my group members. Specifically, we were dealing with a problem about a queue implementation for the parallelized project. However, I already had such an implementation active in the parallel code, so there was no need to do it again. I tried to explain that to my group members via EVO, but unfortunately the message was not well understood.


Fortunately, at our second meeting of the week, we went over my parallel code and I showed them the queue implementation, which they completely understood. Furthermore, I presented to them a file I/O error that the sequential code was throwing in cases where more tasks are submitted than there are nodes. I provided my group members with a printout of the error. One of my group members identified the bug and agreed to help solve it.


Another challenge that I have deals with learning the procedure of writing a technical paper. Dr. Sadjadi provided me with great insight: learn by reading sample technical papers and ask my team members for help.


First of all, the biggest plan right now is to have the parallelized program running perfectly on the cluster. There is only one bug to fix, which deals with some file I/O in the sequential code of the application.


Secondly, my goal is to have an autonomic computing implementation ready for the parallel program as well.


Lastly, my goal is to start writing the technical paper and do my best to have it ready by the end of next week.




Keywords: progress report

Posted by Camilo Silva | 0 comment(s)

July 14, 2008


Camilo A. Silva

July 14, 2008


This week was instrumental in fulfilling the objective of parallelizing our group's project. A lot of work has been invested in this good cause. At the beginning of the week, I finished a testing model that would perform the message-passing communication just as designed and desired. The testing model worked as planned. Without difficulties. Without worries.


This past Thursday, I started to modify the code created for the testing model so it could be adapted to the project18 code to be parallelized. This adaptation was “completed” on Friday. Tests started to be executed, but little did I know that many troubles awaited me. Since that Friday, I have been performing tests—nonstop. It has been a learning experience. Someone who has been instrumental in overcoming the challenges is Michael Robinson, our group leader. On Saturday, we had an informal conference call in order to work out some issues with the parallelization of the code. We were able to work something out and find one of the bugs. Once it was fixed, some more tests were submitted that night. However, the tests had errors—this time the errors dealt with file permissions. Early Sunday morning, I made some modifications to the access permissions of the files needed and tried to run the program once again. To my surprise, the program executed successfully. However, the program only runs successfully when the number of tasks submitted is at most the number of nodes; when there are more tasks than nodes, the program does not run successfully. I am hoping to fix that bug soon.

The parallelization of the program was completed.

Most of the challenges faced dealt with the debugging of the parallelization code.


The goal for this week is to have the parallelized code working efficiently. Also, I plan to implement the self-healing and self-optimized functions. I plan to start writing the technical paper as well as building the website.




Keywords: bioinformatics, report

Posted by Camilo Silva | 0 comment(s)

July 07, 2008

Camilo A. Silva

June 29 – July 6



This past week was essential because it was time for me to perform some tests of the MPI-IO capabilities for writing files collectively (when all nodes write to the same file) and independently. Three different tests were made, and I reported the results to my Bioinformatics group.


Last week, there was only one group meeting with my team, and we discussed our goal of having the parallelization of our project complete by July 15. It seems that we will be able to meet our goal only if we can work on the parallelization of the program in the next couple of days and run it during the weekend.


I have also met with Dr. Hector Duran and I shared with him our group’s goals and deadlines for the days to come.

Last week was a success. I was able to write different MPI programs that tested MPI-IO capabilities and to learn how they handle the writing and reading of files on the cluster.


Moreover, I completed a PowerPoint presentation that lays out a design for parallelizing our project. This design will serve as a programming roadmap for the parallelization of the code. The presentation can be found here:


It is entitled "Parallelizing ... Bio Project"

I am glad that everything is moving forward; now it’s time to put everything together and put it to work!    


I just had some technical problems with the proper usage of some parameters of the MPI-IO functions—especially the buffer. I was able to resolve those issues by running the tests on the Grid and, through trial and error, I came to understand how the buffer is supposed to be used.

Implement MPI in the sequential code for our project, parallelize it, run some tests, and have it ready by the end of this weekend.


FIRST PAPER: Overview of the MPI-IO parallel IO interface

by: Corbett, Peter; et al.


This document describes the MPI-IO interface, which is intended to support asynchronous I/O—allowing computation to overlap with I/O—and optimization of the physical file layout on storage devices. The overall idea of MPI-IO is to model I/O as message passing while fulfilling some proposed goals, such as targeting scientific applications as well as other applications, addressing a real-world need, and favoring performance over functionality. In essence, MPI-IO is used to read and write files in a collective manner—where all processors in the cluster are able to access them.


The paper starts by talking about data partitioning, and the authors explain that MPI derived datatypes are used to describe how data is laid out in the user's buffer. MPI-IO builds on these with two derived datatypes known as the filetype and the buftype. A filetype simply defines a data pattern that is replicated throughout the file; MPI derived datatypes consist of fields of data located at specified offsets.


The next topic that the authors discussed was MPI-IO data access functions. This topic explained the importance of understanding that, in a parallel environment, the system must decide whether a file pointer is shared by multiple processes or accessed by a single process. The authors did a great job at explaining terms and definitions; for example, they simply defined the file pointer as what is used to keep track of the file position.


In the last topics, the authors talked about blocking and non-blocking synchronization. They explained that blocking I/O calls block until completed, while a non-blocking I/O call only initiates an I/O operation and does not wait for it to complete. This led to the last topic, on file layout and coordination, which explained in detail that MPI-IO is intended as an interface that maps between data stored in memory and a file.
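As a sketch of the blocking, collective style the paper describes, each rank can write its own fixed-size record into a shared file. This assumes a working MPI installation (compile with mpicc, run with mpirun); the filename is made up:

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

/* Sketch: every rank writes one fixed-size record into the same file,
 * at an offset derived from its rank, using a blocking collective call. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* %4d keeps every record the same length (for ranks < 10000). */
    char record[32];
    snprintf(record, sizeof record, "result from rank %4d\n", rank);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "results.txt",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective blocking write: every process participates, and each
     * lands at its own offset so the writes do not overlap. */
    MPI_Offset offset = (MPI_Offset)rank * (MPI_Offset)strlen(record);
    MPI_File_write_at_all(fh, offset, record, (int)strlen(record),
                          MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```

The non-blocking variant would use MPI_File_iwrite_at and a later wait, letting computation proceed while the write is in flight.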


This paper was extremely helpful for me because it provided tons of insight about MPI-IO and how its structure is defined along with its main purpose and objectives. I was able to learn more about the “inside” picture of how a collective file creation would be handled and completed. This information will help me in the completion of my project due to the fact that it seems that we will be implementing an application of our project that will be using collective I/O commands.


SECOND PAPER: Sowing MPICH: A Case Study in the Dissemination of a Portable Environment for Parallel Scientific Computing

by: William Gropp and Ewing Lusk


This paper explains the whole process of how MPICH was put together and covers interesting information on its architecture. It covers topics on preparing software for unknown environments, structuring software to absorb contributions by others, automating the creation of manual pages and documentation, automating pre-releases, and managing the inevitable problem reports with a minimum of support resources.


The authors did a great job at explaining all the details of MPICH: they successfully covered the goals of MPICH, multisite development, portability, managing documentation, automated testing, release for distribution, and tools for managing interactions with users. Something that I learned from them is that the goal of MPICH, as an MPI implementation, is simply robustness, performance, and portability. They pretty much presented every aspect of the development of the MPICH project, providing techniques and tools that might be common to any project whose goal is creating portable parallel tools and distributing them to a community.


This paper truly is not related to my research, but I found it interesting to learn how MPICH was developed. I was hoping to learn more about the functions of MPICH and more details on the application at runtime, however.


THIRD PAPER: Dynamic Process Management in an MPI Setting

by: Gropp and Lusk


This paper focuses on how processes are managed during runtime. It describes an architecture for the runtime environment of a parallel program that separates the functions of the job scheduler, the process manager, and the message passing system. An important fact that I learned is that a parallel program never runs in isolation: it needs computing and other resources, and its processes must be started and managed. Thus, one way to decompose the complex runtime environment is to separate the functions of the job scheduler, the process manager, the message passing library, and security.


The job scheduler's function is to allocate the resources for a parallel program as well as the time when the parallel program will run. The process manager is in charge of managing a process once started—specifically its standard input, output, and error. The message passing library is used by the program for its interprocess communication. Finally, security ensures that the job scheduler does not allocate resources that are not supposed to be allocated, that the process manager indeed manages the processes it starts, and that the message passing library delivers messages only to their proper destinations.


The authors did a great job in explaining the different components needed for a parallel environment. Furthermore, they cover an important topic which is about the communication of each different component. They went over three different types of communication applications such as task farming, dynamic distribution, and client/server communication. All in all, the authors were able to explain in good detail the environment where a dynamic process management takes place and some types of applications that could use it.


This paper helped me in my research by giving me a better understanding of how dynamic process management works. Specifically, I wanted to find some insight into how to self-optimize an MPI process. Happily, I was able to get some ideas—in this case by exploring the job scheduler, which is the one in charge of assigning resources to a task. I do not know how to do that yet, but I will be looking and researching a bit more.


Posted by Camilo Silva | 0 comment(s)

July 01, 2008

I was having some difficulties with a simple MPI-IO testing program. The program is supposed to ask the user to input the name of a file. The file would be created if it does not exist and then opened so that 'x' number of nodes in the cluster could write to it collectively. Thus, there would be a single file with the content given by the nodes.

My first problem dealt with learning that "PATH=/opt/mpich/gnu/bin:$PATH" is an environment variable, and that it is discarded every time one logs out. Well, I did not know that! hehehe! So every time I came back and tried to use my program, I could not!
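For anyone hitting the same thing, the usual fix is to put the export into a shell startup file so it survives logging out. A sketch, assuming bash and that ~/.bashrc is read when you log in (the exact startup file depends on the shell and system):

```shell
# The export only lives for the current session:
export PATH=/opt/mpich/gnu/bin:$PATH

# To make it permanent, append the same line to a startup file:
echo 'export PATH=/opt/mpich/gnu/bin:$PATH' >> ~/.bashrc
```

After that, every new login shell picks up the MPICH binaries without retyping the export.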

My last challenge was broadcasting a message to all nodes. In this case, I needed to broadcast the name of the file to be opened to all nodes. Thanks to an MPI-IO program that a member of my team provided me, I was able to find out what I was missing--and I was able to fix it.
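A minimal sketch of that broadcast pattern, assuming a working MPI installation (the prompt text and fallback filename are made up): rank 0 reads the name, and MPI_Bcast hands it to everyone before the collective open:

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

/* Sketch: only rank 0 reads the filename (stdin is usually attached
 * to rank 0 only), then broadcasts it so every rank can take part
 * in the collective MPI_File_open. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char fname[256] = {0};
    if (rank == 0) {
        printf("file name: ");
        fflush(stdout);
        if (scanf("%255s", fname) != 1)
            strcpy(fname, "default.dat");   /* made-up fallback name */
    }

    /* Every rank must end up with the same name before the open. */
    MPI_Bcast(fname, sizeof fname, MPI_CHAR, 0, MPI_COMM_WORLD);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, fname,
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```

Without the broadcast, the other ranks would call MPI_File_open with an empty name, which also explains why input-reading parallel programs can appear to "ignore" the user.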

In conclusion, after a lot of trials, I was able to carry my goal for the day forward, and the solutions to my little challenges were found!


Keywords: challenge, MPI, MPI-IO, solution to a problem

Posted by Camilo Silva | 0 comment(s)

June 30, 2008

I have some pictures that I would like to share with all of you. These are pictures of my friends Sean and Allison in CUCEA (the university campus). You will see our contact professor, Dr. Hector Duran, in some of the pictures below. He's a great person.


Since it was not that hot and was rather chilly outside, we decided to work in the cyberforest section that this campus has. Each table has an electric outlet and an Ethernet outlet next to it to connect to the LAN. There is also Wi-Fi around, but the signal is sometimes weak. After a couple of hours of working, we went to see Dr. Duran for our weekly progress report meeting.


We usually meet once per week. However, last week we met on two days, Tuesday and Thursday. During our meetings, Dr. Duran gives us guidance, insightful feedback, and positive criticism.




Keywords: CUCEA, Hector Duran, professor, Progress reports, Research experience, weekly meetings

Posted by Camilo Silva | 0 comment(s)

Camilo A. Silva

June 23 – June 29

Action. That’s what this whole week has been about. I had time to read some papers and further my studies on the topics of MPI, MPICH, autonomic computing, and a little on the usage of the Rocks GCB cluster. On Monday, June 23, I had my group meeting with my bioinformatics team members, and I was able to share with them my latest presentation on MPI: the MPICH implementation and derived datatypes.

During that meeting, we decided to start designing the structure models for the communication of the MPI program for the project18 parallelization. I volunteered to design the model, and I presented it to my group members at the following meeting of the week, on Wednesday, June 25. During that meeting, we agreed that the first parallel implementation of project18 would simply start the processing of different discriminating probe genomes on the different nodes. In other words, project18 would be sent to each node of the GCB cluster, and after it finished computing, the result files would be saved and accessed in the /share/../bioinformatics/results/ folder.

In order to handle files, MPI possesses a library of I/O functions known as MPI-IO. Therefore, my task from that Wednesday meeting until today was to learn MPI-IO and run a simple test on the GCB cluster, where a file is created and opened collectively so that all nodes can write to it. Additionally, I wanted to create a different I/O test that allows each node to open a file, read its contents, and append information to it.

I have been able to report my progress to Dr. Duran here at the University of Guadalajara. Every Tuesday and Thursday from 4:00 p.m. to 5:00 p.m., Sean, Allison, and I have a meeting with Dr. Duran. I have been able to talk with Dr. Duran about autonomic-computing self-healing properties for my project as well as the MPI-IO implementations that I needed to learn. I shared with him that the self-healing implementation of my project was not as concrete as I was expecting, since I had not yet had a chance to program the parallel version of it, and I was not fully aware of what faults I should expect beyond the famous ones of “a node going down or connection losses.”


The great accomplishment for this week is that the design of the MPI communication structure of the program was completed. The PowerPoint presentation can be found here: http://latinamericangrid.org/elgg/camilo.silva/files/23

Another accomplishment is that I was able to learn the basics of MPI-IO after completing a lot of readings. Here are some of the materials that guided me tremendously in learning the basics of MPI-IO:

I could say that the biggest issue until now (BTW, this is something that I am still trying to solve) is a technical one. Throughout this weekend, I was working on some testing programs for the MPI-IO functions in order to practice and learn how they perform on the cluster. The testing program that I created needed interaction with the user—meaning that it asked the user to input some information, such as the name of the file to be created or opened. What happened, unfortunately, was that at run-time the program would ignore the input from the user and just carry along until the end of the program. It was kind of funny to see a program act this way! So, the only thing left to do was to Google. I tried to look for more similar examples asking the user for input, and I compiled them as well with hopes of solving the problem. But, guess what? The same error kept happening over and over again.

It was not until today, Monday, June 30, 2008, that I decided to consult other friends of mine who are more knowledgeable and experienced with MPI-IO to give me a hand. Thus, I am still waiting to solve this little issue of user interaction during the execution of a parallel program.

The major goal for this week is to write the parallel code for project18 and hopefully have a test run over this weekend.  




These are a collection of documents that I found online from credible sources that cover the basics of MPI-IO. The important thing that I learned about MPI-IO is that it allows the programmer to design a file I/O system where all the nodes can collectively access a particular file. What that means is that a file can be opened and all the nodes are able to write to that same file, each following an offset that is advanced after each node has written to the file. There are different types of functions depending on the objective and purpose of the parallel program that will be run. Some of the functions are categorized as blocking and non-blocking functions. There are other functions that allow a file to be written non-contiguously or contiguously.
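As a sketch of the non-contiguous case (assuming a working MPI installation; the filename and sizes are made up), a strided filetype plus MPI_File_set_view lets each rank own an interleaved stripe of the shared file:

```c
#include <mpi.h>

/* Sketch: a non-contiguous file view, where each rank "owns" an
 * interleaved stripe of integers in a shared file. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    enum { BLOCK = 4, NBLOCKS = 8 };    /* made-up sizes */
    int data[BLOCK * NBLOCKS];
    for (int i = 0; i < BLOCK * NBLOCKS; i++)
        data[i] = rank;                 /* dummy payload */

    /* Filetype: NBLOCKS blocks of BLOCK ints, strided so that the
     * other ranks' blocks are skipped over. */
    MPI_Datatype filetype;
    MPI_Type_vector(NBLOCKS, BLOCK, BLOCK * nprocs, MPI_INT, &filetype);
    MPI_Type_commit(&filetype);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "stripes.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank's view starts at its own first block; after this,
     * offsets are interpreted through the strided filetype, so a
     * single collective write scatters the data into the stripes. */
    MPI_Offset disp = (MPI_Offset)rank * BLOCK * sizeof(int);
    MPI_File_set_view(fh, disp, MPI_INT, filetype, "native", MPI_INFO_NULL);
    MPI_File_write_all(fh, data, BLOCK * NBLOCKS, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
    MPI_Finalize();
    return 0;
}
```

The appeal of the view mechanism is that, once set, the program writes its buffer contiguously and MPI-IO handles the non-contiguous placement in the file.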

I found that the different references were helpful in different ways. For example, the first reference, which is from Argonne National Laboratory near Chicago, focuses its presentation not only on the basics of MPI-IO but also on some other topics such as sparse matrix I/O, passive-target RMA, and improving performance. In the material from indiana.edu, I found all the program examples and their detailed explanations very interesting. In the mhpcc.edu document, I liked very much how each function was described and how all of its parameters were presented and explained.

The information in these documents was really important because I was able to learn all the basics of MPI-IO. Almost everything that I learned will be used in project18. Thus, these documents will be a great reference for the work I am currently doing.

Posted by Camilo Silva | 0 comment(s)

June 24, 2008

Hello there friends! I just wanted to let you know how I was doing after the break-in, which was the subject of my previous personal blog post. Well, the following day, I moved to a suite hotel that was huge--I spent two days over there. Afterwards, I moved to an antique, colonial hotel, the Hotel de Mendoza. I spent a week over there, and pretty much I had to grab a cab to go to the University because it was kind of far.

I just want to take a moment and thank God as well, just as some of my peers have done, because I have been able to experience challenges as well as opportunities to excel during my visit to Mexico. Personally, I feel happy for this great opportunity--it has helped me expand my mind and recognize the impact of global communications in our era. Truly, thanks to the Internet, I have been able to communicate with all my close friends, family members, and most importantly my girlfriend (jeje).

So far, I have been able to work closely with my Bioinformatics group and even lecture them on a topic that I was not comfortable with at all, since I did not have any experience in it: MPI and MPICH. I was able to communicate with them globally through EVO and work on our project together.

Furthermore, I wanted to share with all of you that my professor here, Dr. Hector Duran, has been a tremendous help! He is more than willing to meet with us on a regular basis. Every week we meet for about thirty minutes, either on Mondays or Thursdays, and we discuss our challenges, ideas, and prospective plans for my project. I could say that the thing I like best about Dr. Duran is his ability to guide and provide constructive feedback in a simple and positive way. Also, he does not mind at all explaining topics which one might not know.

As far as the research experience and laboratories go, I am happy to report that our lab has AC! Yes! Believe it or not, AC in Guadalajara is extremely hard to find! Every day my friends Sean and Allison and I ride the bus to CUCEA, and we spend the day there from 9:30 a.m. to 6:30 p.m. Sean and Allison have been great friends. I am so thankful to have been sharing my time with them! I have learned really cool stuff from them, ranging from YUI to WoW (World of Warcraft). On another note, I am happy to report that my group's first parallel program ran successfully today on the GCB cluster around 6:30 p.m.! YEEEEEYYYY! Please take a look at the pics:

As far as cultural entertainment, I just want to share with you briefly some of the places that we have visited:
-Guadalajara's Downtown--has some of the oldest buildings of Mexico
-La Chata Restaurant
-Don Quixote Ballet Performance at Degollado Theater located in the City's Downtown
-My personal favorite, Santo Coyote restaurant! This is the best place to eat!

I have too many pictures to share with you, so I would like to invite you to check my personal photo gallery at:
http://latinamericangrid.org/elgg/camilo.silva/files/ There, you will find pictures of my trip so far--BTW, I created a folder just for the restaurant Santo Coyote!

After all, everything works for the BEST! Cool

Keywords: Bioinformatics, Camilo, CUCEA, Mexico, MPI, Projects, Research

Posted by Camilo Silva | 0 comment(s)

<< Back