
Driving the Future with Smart and Intelligent ICT


Basic information:

Chairs:

Karolj Skala (Croatia), Roman Trobec (Slovenia), Uroš Stanič (Slovenia)

Steering Committee:

Enis Afgan (Croatia), Piotr Bala (Poland), Leo Budin (Croatia), Jelena Čubrić (Croatia), Borut Geršak (Slovenia), Simeon Grazio (Croatia), Gordan Gulan (Croatia), Yike Guo (United Kingdom), Ladislav Hluchy (Slovakia), Željko Jeričević (Croatia), Peter Kacsuk (Hungary), Aneta Karaivanova (Bulgaria), Charles Loomis (France), Ludek Matyska (Czech Republic), Željka Mihajlović (Croatia), Damijan Miklavčič (Slovenia), Laszlo Szirmay-Kalos (Hungary), Tibor Vámos (Hungary), Matjaž Veselko (Slovenia)

Papers presented in English and published in the conference proceedings will be submitted for inclusion in IEEE Xplore.

Event program
Thursday, 5/28/2015 9:00 AM - 1:00 PM,
Camelia 2, Grand hotel Adriatic, Opatija
 Distributed Computing 
9:00 AM - 9:15 AM
L. Forer (Division of Genetic Epidemiology, Medical University of Innsbruck, Innsbruck, Austria), E. Afgan (Center for Informatics and Computing, Ruder Boskovic Institute, Zagreb, Croatia), H. Weissensteiner (Division of Genetic Epidemiology, Medical University of Innsbruck, Innsbruck, Austria), D. Davidović (Center for Informatics and Computing, Ruder Boskovic Institute, Zagreb, Croatia), G. Specht (Institute of Computer Science, Research Group Databases and Information Systems, Innsbruck, Austria), F. Kronenberg, S. Schoenherr (Division of Genetic Epidemiology, Medical University of Innsbruck, Innsbruck, Austria)
Cloudflow – A Framework for MapReduce Pipeline Development in Biomedical Research 
The data-driven parallelization framework Hadoop MapReduce allows analysing large data sets in a scalable way. Since the development of MapReduce programs is often a time-intensive and challenging task, the adoption of Hadoop in biomedical research is still limited. We present EasyMR, a novel high-level framework that hides the programmatic overhead of writing Hadoop jobs by providing interfaces for (a) job setup and execution, (b) data management and (c) HDFS access. EasyMR abstracts the process of writing map and reduce methods, tailored to its use in biomedical research. We demonstrate the benefit of EasyMR on the parallelization of a genome-wide association study (GWAS) use case. We further show that the framework can be combined efficiently with Cloudgene and CloudMan, which allows us to provide the developed applications as a service to everyone.
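The EasyMR interfaces themselves are not reproduced in the abstract; the following is a minimal pure-Python sketch of the kind of high-level map/reduce composition behind which such a framework hides Hadoop boilerplate. All names are illustrative, not the framework's real API.

```python
from itertools import groupby
from operator import itemgetter

def run_pipeline(records, mapper, reducer):
    """In-memory stand-in for a Hadoop MapReduce job: apply the mapper,
    shuffle/sort by key, then apply the reducer per key group."""
    mapped = [kv for rec in records for kv in mapper(rec)]
    mapped.sort(key=itemgetter(0))  # the shuffle/sort phase
    return {key: reducer(key, [v for _, v in group])
            for key, group in groupby(mapped, key=itemgetter(0))}

# Illustrative GWAS-flavoured job: count genotype calls per variant.
def mapper(line):
    variant_id, genotype = line.split()
    yield variant_id, genotype

def reducer(variant_id, genotypes):
    return {g: genotypes.count(g) for g in set(genotypes)}

data = ["rs123 AA", "rs123 AG", "rs456 GG", "rs123 AA"]
print(run_pipeline(data, mapper, reducer))
# {'rs123': {'AA': 2, 'AG': 1}, 'rs456': {'GG': 1}}
```

In the real framework the same two callables would be handed to Hadoop for distributed execution; the point of the abstraction is that the pipeline author writes only these.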
9:15 AM - 9:30 AM
Y. Gordienko (G.V.Kurdyumov Institute for Metal Physics, National Academy of Sciences, Kyiv, Ukraine), S. Stirenko (High Performance Computing Center, National Technical University of Ukraine “KPI”, Kyiv, Ukraine), O. Gatsenko, L. Bekenov (G.V.Kurdyumov Institute for Metal Physics, National Academy of Sciences, Kyiv, Ukraine)
Science Gateway for Distributed Multiscale Course Management in e-Science and e-Learning — Use Case for Study and Investigation of Functionalized Nanomaterials 
The current tendency in human learning and teaching is towards the development and integration of digital technologies (such as cloud solutions, mobile technology, learning analytics, big data, augmented reality, natural interaction technologies, etc.). But the available e-Science and e-Learning frameworks (for example, OpenCourseWare initiatives such as edX, Coursera, …) are: (a) very heterogeneous (the available learning content differs widely across institutions), (b) often far removed from real-life learning situations, (c) in general static, like stationary web pages with online access from desktop PCs only, (d) not flexible to the very volatile demands of end users (students, pupils, ordinary people) and the real world, (e) not adjustable in complexity, duration, and range (e.g. for taking some modules, but not the whole course), especially in life-long learning or vocational training, and (f) lacking feedback about the quality, value, and necessity of some parts of a course or combinations of some modules. Our Science Gateway (http://scigate.imp.kiev.ua), in collaboration with the High Performance Computing Center (http://hpcc.kpi.ua), is aimed at close cooperation among the main actors in the learning and research world (teachers, students, scientists, supporting personnel, volunteers, etc.) with industry and academia, to propose new frameworks and interoperability requirements for the building blocks of a digital ecosystem for learning (including informal learning) that develops and integrates current and new tools and systems. It is a portal for the management of distributed courses (workflows), tools, resources, and users, built on the Liferay framework and gUSE/WS-PGRADE technology. It is based on a multi-level approach (as to methods/algorithms) to effective study and research through flexible selection and combination of unified modules (“gaming” with modules as with LEGO bricks). This allows us to provide a flexible and adjustable framework with direct involvement in real-world and scientific use cases motivated by the educational aims of students and real scientific aims in labs. Its novelties and advantages in comparison to the numerous available online learning solutions consist in: (a) homogenization and smoother integration of the heterogeneous content available from various labs, departments, and institutions (through standard requirements for new modules and wrappers for legacy modules), (b) a closer approach to real-life learning situations, (c) dynamic content, i.e. some modules in the shape of Virtual Labs around simulations or remote experiments (if the latter are found among the potential partners), (d) flexible combinations of various modules on demand of end users, (e) multiscale modules with versions differing in complexity (beginner, advanced, professional, etc.), duration, like “nano-module” (1-5 min), “micro-module” (5-10 min), “macro-module” (0.5-3 hours), and range, (f) a “billing” system around modules to provide feedback for teachers about the quality, value, and necessity of some parts of a course or combinations of some modules, (g) open or commercial repositories of the “bricks”: modules, Virtual Labs, remote experiments, and (h) a portal for content, tool, and user management.
The feasibility of the proposed idea has already been tested by students and is illustrated here for a materials science course dedicated to functionalized nanomaterials, which are of great interest and demand in various physical, chemical, biological, and medical applications. In conclusion, the Science Gateway for distributed multiscale course (complex workflow) management in e-Science and e-Learning provides an opportunity to remove obstacles (the current restrictions of time and physical space) to ubiquitous learning and research models, which can be implemented feasibly and efficiently.
9:30 AM - 9:45 AM
A. Shamakina, L. Sokolinsky (South Ural State University, Chelyabinsk, Russian Federation)
Evaluation of POS Scheduling Algorithm for Distributed Computing Environment 
One of the important classes of computational problems is the class of problem-oriented workflow applications executed in a distributed computing environment. A problem-oriented workflow application can be represented by a directed graph with tasks as the nodes and data flows as the arcs. For such an application, we can predict the execution time of each task and the amount of data to be transferred between tasks. A significant number of scheduling algorithms for distributed computing environments have been proposed. Some of them (like the DSC algorithm) take into account the peculiarities of problem-oriented workflow applications. Others (like the Min-min algorithm) take into account the many-core structure of the nodes of the computational network. However, none of them takes both of these properties into account. We developed the new Problem-Oriented Scheduling (POS) algorithm for distributed cluster computing environments, which is able to plan the launch of a task on several processor cores, taking into account the limit of task scalability. The POS algorithm constructs a sequence of scheduling configurations in order to minimize the cost of the critical path in the schedule. The algorithm was implemented in Java as a service based on the UNICORE platform. Computational experiments demonstrate that the POS algorithm significantly outperforms the DSC and Min-min algorithms in total job execution time.
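The abstract does not give the POS algorithm itself; the sketch below only illustrates the critical-path cost that such schedulers minimize, on a toy task graph with predicted task times and transfer costs (all numbers hypothetical).

```python
from functools import lru_cache

# Workflow DAG: task -> (predicted execution time, list of successors).
tasks = {"A": (4, ["B", "C"]), "B": (3, ["D"]), "C": (6, ["D"]), "D": (2, [])}
transfer = {("A", "B"): 1, ("A", "C"): 1, ("B", "D"): 2, ("C", "D"): 1}

@lru_cache(maxsize=None)
def critical_cost(task):
    """Cost of the most expensive path starting at `task`, counting
    execution times and inter-task data transfers."""
    time, succs = tasks[task]
    return time + max((transfer[(task, s)] + critical_cost(s) for s in succs),
                      default=0)

print(critical_cost("A"))  # 4 + 1 + 6 + 1 + 2 = 14, via A -> C -> D
```

A scheduler in the spirit of POS would additionally decide how many cores each task gets (bounded by its scalability limit) and re-evaluate this cost for each candidate configuration.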
9:45 AM - 10:00 AM
A. Banka, M. Dedmari, M. Masoodi (Islamic University of Science & Technology, Srinagar, India)
Performance evaluation on a Grid platform 
Technological advancements have resulted in a substantial increase in commodity computing, mainly as the outcome of faster hardware and more sophisticated software. Despite the presence of supercomputers, not all problems in science, engineering, and business can be dealt with efficiently and effectively, mainly because of complexity and cost. For a complex program from any of the fields above, the data provided requires a number of heterogeneous resources that are scattered across the globe, making the problem very cumbersome to handle. To address this, the concept of grid computing evolved, which combines and connects all the required heterogeneous resources into a single entity that can resolve the problem at hand. Practical implementation of a grid raises many problems which need to be addressed before the system is put to use. One of the main aspects that must be kept in check while executing a problem set on a grid is security, which mainly includes the privacy and integrity of data that becomes vulnerable due to its distributed nature. This paper focuses on the implementation of a grid and gives an idea of the difference in performance between a stand-alone system and a grid. More broadly, we discuss the analysis of a specific problem set on both platforms, and provide the analysed data to support the high-performance nature of the grid in contrast to a stand-alone system.
10:00 AM - 10:15 AM
N. Gordienko (Phys.-Math. Lyceum 142, Kyiv, Ukraine), O. Lodygensky (LAL, University Paris South, Orsay, France), G. Fedak (University of Lyon, Lyon, France), Y. Gordienko (G.V.Kurdyumov Institute for Metal Physics, National Academy of Sciences, Kyiv, Ukraine)
Synergy of Volunteer Measurements and Volunteer Computing for Effective Data Collecting, Processing, Simulating and Analyzing on a Worldwide Scale 
The proposed topic concerns the new idea of “Citizen Science” and volunteer involvement in science. The main principle is a paradigm shift: from passive “volunteer computing” (now widely used in many fields of science) to other volunteer actions under the guidance of scientists: “volunteer measurements”, “volunteer data processing and visualization”, “volunteer data mining”, etc., which can be carried out by ordinary people with modern standard computing devices (PCs, smartphones, tablets, …) running various operating systems, using their built-in sensors or easily accessible measuring units. The most important aspects are the possibilities to: (1) leverage the available “crowdsourced” resources, both machine (personal CPUs + sensors) and human (brains, manual operation of sensors), (2) attract a huge number of volunteers (millions), temporarily or throughout their lives, (3) obtain new scientific “quality” from these huge “quantities”, (4) involve ordinary citizens in the scientific process, and (5) report to society about current scientific activities and priorities. The feasibility of the proposed idea has already been tested by students in several award-winning volunteer projects with several types of activities in different fields of science, and is illustrated here by some use cases. Special attention is paid to a system of volunteer scientific measurements for studying cosmic rays. It is considered an additional (not alternative!) way to study air showers (created by cosmic rays), complementing the Pierre Auger Observatory, an international cosmic ray observatory designed to detect ultra-high-energy cosmic rays. The technical implementation is based on integrating data about registered night flashes (detected by radiometric software) in a shielded camera chip with synchronized time and GPS data in ordinary smartphones/tablets/other gadgets: to identify night “air showers” of elementary particles (a personal “air shower” monitor), and to analyze the frequency and distribution of “air showers” in densely populated cities (for example, to create a virtual online map of “air showers”). The project currently includes students of the National Technical University of Ukraine (Kyiv, Ukraine), who are compactly located in Kyiv and contribute their volunteer measurements. Some practical conclusions were drawn at the current stage of the project's development. Cosmic rays can be investigated in any place, not just in some parts of the Earth. The technology would be very effective if automated (for example, on the basis of XtremWeb and/or BOINC technologies for distributed computing). The method could be very useful if a small area has many volunteers; the technology will be especially effective in universities and other educational institutions (Corporative/Community Crowd Computing). But more fruitful results can be obtained by integrating volunteer measurements and volunteer computing on the basis of XtremWeb and/or BOINC technologies, for example, for comparison of measured and simulated data (the latter from the AIRES system for air shower simulations). The project provides an additional (not alternative!) educational and research way to study cosmic rays (air showers) without huge installations, numerous highly qualified scientists, and high basic and maintenance costs.
10:15 AM - 10:30 AM
K. Cvetkov, S. Ristov, M. Gusev (Ss. Cyril and Methodius University, Faculty of Computer Science and Engineering, Skopje, Macedonia)
Successful Implementation of L3B: Low Level Load Balancer 
The cloud computing paradigm offers instantiating and deactivating virtual machine instances on demand, according to clients' requirements. When a customer's application or service needs more resources than a physical server can supply, the cloud provider offers a load balancing technique to distribute the load among several servers that host the application or service. Additionally, the cloud provider should offer a resource broker to make the application scalable and elastic. In this paper we present a new solution for a low-level load balancer working at the network level. Our load balancer maps the memory addresses of the balancer and the target physical servers (or virtual machines in the cloud) and thus balances the load. The experiments show that it achieves even a superlinear speedup (a speedup greater than the number of scaled resources).
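L3B's memory-address mapping is not detailed in the abstract; as a rough illustration of network-level balancing only, here is a minimal round-robin TCP relay (backend addresses are hypothetical). This is a sketch of the general idea, not the L3B implementation.

```python
import socket
import threading
from itertools import cycle

BACKENDS = cycle([("10.0.0.11", 8080), ("10.0.0.12", 8080)])  # hypothetical servers

def relay(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while (chunk := src.recv(4096)):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

def serve(listen_port=8000):
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(("", listen_port))
    lsock.listen()
    while True:
        client, _ = lsock.accept()
        backend = socket.create_connection(next(BACKENDS))  # round-robin pick
        threading.Thread(target=relay, args=(client, backend), daemon=True).start()
        threading.Thread(target=relay, args=(backend, client), daemon=True).start()

if __name__ == "__main__":
    serve()
```

A balancer like the one described operates below this level (rewriting addresses rather than proxying connections), which is what makes near-zero overhead, and hence the reported superlinear speedup, possible.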
10:30 AM - 10:45 AM
M. Riedel, M. Goetz, M. Richerzhagen, P. Glock, C. Bodenstein, S. Memon, S. Memon (Juelich Supercomputing Centre, Juelich, Germany)
Scalable and Parallel Machine Learning Algorithms for Statistical Data Mining - Practice and Experience 
Many scientific datasets (e.g. in the earth sciences and medical sciences) increase in volume or in dimensionality due to the ever-increasing quality of measurement devices. This contribution focuses on how such datasets can take advantage of new 'big data' technologies and frameworks that are often based on parallelization methods. Lessons learned with medical and earth science data applications that require parallel clustering and classification techniques, such as support vector machines and density-based spatial clustering of applications with noise (DBSCAN), are a substantial part of the contribution. In addition, selected experiences with related 'big data' approaches and concrete mining techniques (e.g. dimensionality reduction and feature extraction methods) are also addressed.
10:45 AM - 11:00 AM Break 
11:00 AM - 11:15 AM
E. Ivanova, L. Sokolinsky (South Ural State University, Chelyabinsk, Russian Federation)
Decomposition of Natural Join Based on Domain-Interval Fragmented Column Indices 
The paper describes a decomposition of the natural join relational operator based on column indices and domain-interval fragmentation. This decomposition admits parallel execution of resource-intensive relational operators without data transfers. All column index fragments are stored in main memory in compressed form to conserve space. During the parallel execution of relational operators, compressed index fragments are loaded on different processor cores. These cores unpack the fragments, perform the relational operator, and compress the fragments of the partial result, which is a set of keys. Partial results are merged into the resulting set of keys, which the DBMS uses to build the resulting table. The described approach allows efficient parallel query processing for very large databases on modern computing cluster systems with many-core accelerators. A prototype of the DBMS coprocessor system was implemented using this technique, and the results of computational experiments are presented. These results confirm the efficiency of the proposed approach.
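A minimal sketch of the fragmentation idea, assuming integer join keys: the key domain is cut into intervals, each pair of same-interval index fragments is joined independently (on different cores in the real system), and since no key can match across intervals, no data transfer between fragments is needed. Compression is omitted here.

```python
from bisect import bisect_right

BOUNDS = [100, 200, 300]  # domain split into intervals [0,100), [100,200), ...

def fragment(index):
    """Split a column index {key: row_ids} into domain-interval fragments."""
    frags = [{} for _ in range(len(BOUNDS) + 1)]
    for key, rows in index.items():
        frags[bisect_right(BOUNDS, key)][key] = rows
    return frags

def join_fragment(fa, fb):
    """Natural join of two fragments over the same interval: matching keys only."""
    return [(key, ra, rb) for key in fa.keys() & fb.keys()
            for ra in fa[key] for rb in fb[key]]

r = {42: [1, 2], 150: [3]}          # column index of relation R: key -> row ids
s = {42: [7], 150: [8], 250: [9]}   # column index of relation S
result = [t for fa, fb in zip(fragment(r), fragment(s))
          for t in join_fragment(fa, fb)]
print(result)  # [(42, 1, 7), (42, 2, 7), (150, 3, 8)]
```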
11:15 AM - 11:30 AM
Ž. Jeričević (Department of Computer Engineering/Engineering Faculty, Rijeka, Croatia), I. Kožar (Department of Computer Modeling/Civil Engineering Faculty, Rijeka, Croatia)
Theoretical and Statistical Evaluation for Approximate Solution of Large, Over-Determined, Dense Linear Systems 
The solution of a linear least squares problem requires the solution of an over-determined system of equations. For large dense systems this requires a prohibitive number of operations. We developed a novel numerical approach for finding an approximate solution of this problem when the system matrix is dense. The method is based on the Fourier or Hartley transform, although any unitary, orthogonal transform which concentrates power in a small number of coefficients can be used. This strategy is borrowed from digital signal processing, where pruning redundant information from spectra, or filtering selected information in the frequency domain, is usual practice. For the least squares problem, the procedure is to transform the linear system along the columns to the frequency domain, generating a transformed system. The least significant portions of the transformed system are deleted as whole rows, yielding a smaller, pruned system. The pruned system is solved in the transform domain, yielding the approximate solution. The quality of the approximate solution is compared against the full system solution, and the differences are found to be on the level of numerical noise. A theoretical evaluation of the method relates the quality of the approximation to the perturbation of the eigenvalues of the system matrix. Numerical experiments illustrating the feasibility of the method and the quality of the approximation at different noise levels, together with operation counts, are presented.
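A numpy sketch of the described procedure under simplifying assumptions (a random dense system, the FFT as the unitary transform, rows ranked by total energy); the paper's actual experiments and row-selection criterion may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2000, 10
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 1e-3 * rng.standard_normal(m)  # noisy RHS

# Transform the system column-wise to the frequency domain (unitary FFT).
Af = np.fft.fft(A, axis=0, norm="ortho")
bf = np.fft.fft(b, norm="ortho")

# Delete the least significant portions as whole rows: keep high-energy rows.
keep = np.argsort(-np.abs(Af).sum(axis=1))[: m // 10]  # pruned to 10% of rows
x_pruned, *_ = np.linalg.lstsq(Af[keep], bf[keep], rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

# For a well-conditioned pruned system the two solutions agree to roughly
# the noise level of the data.
print(np.max(np.abs(x_pruned.real - x_full)))
```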
11:30 AM - 11:45 AM
J. Rybicki, B. von St. Vieth (Forschungszentrum Juelich, Juelich, Germany)
DARIAH Meta Hosting: Sharing software in a distributed infrastructure 
Research infrastructures have become an everyday tool for doing science. They constitute a cost-effective, quick, and increasingly easy-to-use collaboration tool. So far the focus has been on sharing resources (especially data) and on offering a (rigid) set of services for processing and accessing those resources. It is becoming clear, however, that users want to share not only the data they have gathered or created but also the software they have implemented. Such sharing has the potential to speed up scientific discovery, but only if the software actually runs, i.e. if it becomes a service in the infrastructure. Unfortunately, software implemented in a project is often understandable and installable only by its authors. In this paper we address the problem of sharing services between users (e.g. new data analysis tools) and also touch on the problem of sharing services between infrastructures, for instance to facilitate cross-disciplinary exchange. Our goal is to increase the sustainability of the developed research software and enable the easy extension of research infrastructures beyond a rigid set of services towards flexible software-as-a-service (SaaS) solutions. We share the initial experiences gained during the implementation of the meta hosting service for the DARIAH-DE research infrastructure.
11:45 AM - 12:00 PM
E. Afgan (Ruđer Bošković Institute (RBI), Zagreb, Croatia), K. Krampis (Hunter College, New York City, United States), N. Goonasekera (University of Melbourne, Melbourne, Australia), K. Skala (Ruđer Bošković Institute (RBI), Zagreb, Croatia), J. Taylor (Johns Hopkins University, Baltimore, United States)
Building and Provisioning Bioinformatics Environments on Public and Private Clouds 
Unlike newly developed web applications that can be designed from the ground up to utilize cloud APIs and natively run within cloud infrastructure, most complex bioinformatics pipelines in an advanced state of development can only be encapsulated within VMs along with all their software and data dependencies. To take advantage of the scalability offered by the cloud, additional frameworks are required that stand up virtualized compute clusters and emulate the most common infrastructures found on institutional resources, where most existing bioinformatics pipelines are generally run. In this paper we describe the automated process of deploying one such framework, its compatibility with commercial and academic cloud middleware solutions, and a launcher application that end users can use to provision their own virtual compute clusters.
12:00 PM - 12:15 PM
M. Rantala, J. Soini (Tampere University of Technology, Pori, Finland), T. Kilamo (Tampere University of Technology, Tampere, Finland)
Gathering useful programming data: analysis and insights from real-time collaborative editing 
The development of any real piece of software is a team effort. Traditionally, collaborative development has been practiced in open source communities, an example of collaborative coding efforts to build complex software systems. Even there, the cooperation is mostly at the coordination level. Today, web technology is sufficiently advanced to enable collaborative coding in real time as group work, which eases communication between team members. In this paper, we study this phenomenon from the point of view of knowledge transfer and learning. We have examined the possibilities and challenges in learning web software development during real-time group work, using two different example cases (code camps). Additionally, we have evaluated the validity of the log data created during the code camps. The research frame for this study is the utilization of log data visualization in evaluating group work, and the further development of said visualization in order to support software development.
12:15 PM - 12:30 PM
D. Savchenko, G. Radchenko (South Ural State University, Chelyabinsk, Russian Federation), O. Taipale (Lappeenranta University of Technology, Lappeenranta, Finland)
Microservices validation: Mjolnirr platform case study 
The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating through lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and are independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies. To provide the autonomy of microservices and the loose coupling of the microservice “hive”, each microservice should be isolated from the others and communicate using interfaces only. We assume that each microservice must be deployed in an isolated virtual machine or container and communicate with the others using the HTTP protocol. A microservice architecture cannot be managed manually because of its highly distributed nature. To provide management and automatic scaling of microservices, there should be software support from a “microservice management system” providing facilities for the storage and management (including automatic deployment and termination) of those isolated containers. This support can be provided by a PaaS cloud platform, in order to scale the system automatically, manage it, and provide middleware for message communication. Monitoring mechanisms should also be implemented to control the flow of the microservice application; they can be implemented as a specialized PaaS solution or integrated into an existing PaaS. There are common approaches to monolithic application testing, such as unit testing, integration testing, and functional testing, and these steps can be automated to provide Continuous Delivery, especially in clouds. But the microservices architecture adds difficulties to testing: the lack of direct access to the execution environment and the need to use a mock cloud API when developing microservices; the many limits that virtualization or containerization of microservices imposes on component communication; problems linked to automated scaling and deployment, since the environment has to change according to configuration changes of the microservices, and it is not possible to use local files to store application-specific data or configuration; and the need for the environment to take care of messaging between many components and to handle possible deadlocks and other asynchronous messaging issues. The main goal of the research is therefore to provide a methodology for microservice system testing and a simulation system. To reach this goal we address the following tasks: define the process of microservice system development; define the special aspects of microservice testing; develop a methodology for microservice testing; and provide a simulation of a microservice platform that allows investigating the performance features of cloud resource management methods.
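The Mjolnirr-specific methodology is the subject of the paper itself; below is only a minimal sketch of the isolation constraint described above: a collaborator service is replaced by an HTTP stub, so the microservice under test can be exercised strictly through its interfaces. The service name and JSON contract are invented for the example.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubInventory(BaseHTTPRequestHandler):
    """Stand-in for a collaborator microservice: fixed canned response."""
    def do_GET(self):
        body = json.dumps({"sku": self.path.strip("/"), "in_stock": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubInventory)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The service under test would be configured to call this stub's URL;
# here we just verify the stub contract the test relies on.
url = f"http://127.0.0.1:{server.server_port}/sku-42"
with urllib.request.urlopen(url) as resp:
    assert json.load(resp)["in_stock"] is True
print("stub contract verified; point the microservice under test at", url)
server.shutdown()
```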
12:30 PM - 12:45 PM
A. Bánáti (Óbuda University, Budapest, Hungary), P. Kacsuk (MTA SZTAKI, LPDS, Budapest, Hungary), M. Kozlovszky (Óbuda University, Budapest, Hungary)
Four level provenance support to achieve portable reproducibility of scientific workflows 
In the scientific community, one of the most vital challenges is the reproducibility of workflow execution. In order to reproduce (or to prove) the results of an experiment, provenance information must be collected. Concerning the workflow execution environment, we differentiate four levels of provenance data: infrastructural parameters, environmental parameters, the describers of the workflow model, and input data. During re-execution all of them can change, and capturing the data of each level targets a different problem to solve. For example, storing the environmental and infrastructural parameters enables the portability of workflows between different parallel and distributed systems, such as grids, HPC, or clouds. The describers of the workflow model enable tracking the different versions of the workflow and their impact on the execution. Our goal is to capture the optimal parameters, in number and type, at all four levels, and to reconstruct the way data was produced independently of the environment. In this paper we investigate the necessary and sufficient parameters of these four levels and show the possibilities and usability of each level.
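A minimal sketch of what a record covering the four described levels might look like; the field contents are hypothetical, and deciding which parameters belong in each level is exactly what the paper investigates.

```python
from dataclasses import dataclass, asdict
import json, platform, sys

@dataclass
class ProvenanceRecord:
    """The four provenance levels described above, as one serializable record."""
    infrastructure: dict   # e.g. grid/HPC/cloud site, node counts
    environment: dict      # OS, interpreter, library versions
    workflow: dict         # workflow model version and node describers
    inputs: dict           # input data identifiers and checksums

record = ProvenanceRecord(
    infrastructure={"site": "example-cloud", "nodes": 16},          # hypothetical
    environment={"os": platform.platform(), "python": sys.version.split()[0]},
    workflow={"name": "gwas-pipeline", "version": "1.4"},           # hypothetical
    inputs={"dataset": "run-042.vcf", "sha256": "..."},
)
print(json.dumps(asdict(record), indent=2))  # stored alongside each execution
```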
12:45 PM - 1:00 PM
B. Ivanovska, S. Ristov, M. Kostoska, M. Gusev (Ss. Cyril and Methodius University, Faculty of Computer Science and Engineering, Skopje, Macedonia)
Using the P-TOSCA Model for Energy Efficient Cloud 
The Topology and Orchestration Specification for Cloud Applications (TOSCA) standard is used to describe a cloud application and cloud architecture in order to allow portable deployment to other compatible clouds and multi-cloud applications. P-TOSCA is a recently proposed model and proven concept, an extension of the TOSCA standard that addresses TOSCA's ambiguities and weaknesses. In this paper we use the P-TOSCA model for another issue that is also very important in virtualized datacenters and cloud computing: extending the energy-efficient management system. We present a prototype application, specified with P-TOSCA, that dynamically creates a target virtual machine on a utilized physical compute node and ports the application(s) from a virtual machine hosted on an underutilized physical server to the target virtual machine in a Eucalyptus cloud. After the migration, the prototype application shuts down the now-empty underutilized physical node.
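The P-TOSCA specification itself is not shown in the abstract; this is a minimal sketch of the consolidation policy the prototype automates: pick underutilized hosts, plan VM migrations off them, then shut them down. The threshold and inventory are hypothetical, and a real planner would also check target capacity.

```python
UNDERUTILIZED = 0.20  # CPU share below which a host is a consolidation candidate

# Hypothetical inventory: host -> (cpu utilization, list of hosted VMs).
hosts = {"node1": (0.12, ["vm-a"]), "node2": (0.55, ["vm-b", "vm-c"])}

def consolidation_plan(hosts):
    """Return (vm, source, target) migrations and the hosts to shut down."""
    donors = [h for h, (util, _) in hosts.items() if util < UNDERUTILIZED]
    targets = [h for h in hosts if h not in donors]
    moves = [(vm, src, targets[0]) for src in donors for vm in hosts[src][1]]
    return moves, donors

moves, to_shutdown = consolidation_plan(hosts)
print(moves)        # [('vm-a', 'node1', 'node2')]
print(to_shutdown)  # ['node1'], powered off after the migration completes
```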
1:00 PM - 3:00 PM Lunch break 
Thursday, 5/28/2015 3:00 PM - 7:00 PM,
Camelia 2, Grand hotel Adriatic, Opatija
 Distributed Computing 
3:00 PM - 3:15 PM
E. Kail (Budapest Óbuda University, Budapest, Hungary), P. Kacsuk (MTA SZTAKI, LPDS, Budapest, Hungary), M. Kozlovszky (Obuda University, John von Neumann Faculty of Informatics, Biotech Lab, Budapest, Hungary)
Achieving Dynamic Workflow Management System by Applying Provenance Based Checkpointing Method 
Scientific workflows are data- and compute-intensive and thus may run for days or even weeks on parallel and distributed infrastructures such as HPC systems and clouds. In an HPC environment the number of failures that can arise during scientific workflow enactment can be high, so the use of fault tolerance techniques is unavoidable. The most frequently used fault tolerance techniques are job replication and checkpointing. While job replication is based on the assumption that the probability of a single failure is much higher than that of simultaneous failures, checkpointing saves certain states so that execution can be restarted from that point later on. The effectiveness of checkpointing depends on the checkpointing interval, and a common technique is to adapt the interval dynamically. In this work we compare and contrast the different checkpointing techniques and propose a new provenance-based dynamic checkpointing method.
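The paper's provenance-based method is its own contribution; as a baseline for what "dynamically adapting the interval" means, here is Young's classic first-order approximation, with the failure statistics re-estimated from observed history, as provenance logs could supply.

```python
from math import sqrt

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's first-order optimal checkpoint interval:
    t_opt ~ sqrt(2 * C * MTBF) for checkpoint cost C and mean time between failures."""
    return sqrt(2 * checkpoint_cost_s * mtbf_s)

# Adapt dynamically: re-estimate the MTBF from failures observed so far
# (e.g. from provenance logs) and recompute the interval after each event.
observed_failure_gaps = [3600, 5400, 4800]                 # seconds between failures
mtbf = sum(observed_failure_gaps) / len(observed_failure_gaps)
print(young_interval(checkpoint_cost_s=30, mtbf_s=mtbf))   # ~ 525 s
```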
3:15 PM - 3:30 PM
S. Ristov, M. Gusev (Ss. Cyril and Methodius University, Faculty of Computer Science and Engineering, Skopje, Macedonia)
Operating System Impact on CPU and RAM Utilisation when Introducing XML Security 
Introducing XML security to a web service increases the message size, which impacts the XML parsing of the larger message and complicates its processing due to complex cryptographic operations. Both tasks impact the web server's CPU utilisation. The performance impact is even more pronounced when the number of concurrent messages increases rapidly. In this paper we analyze the impact of securing a web service message with XML Signature and XML Encryption on hardware performance, varying the message size and the number of concurrent messages. The results show that a web server installed on Linux utilizes the CPU less than the same web server installed on Windows.
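The actual WS-Security stack is not reproduced here; the sketch below only mirrors the shape of the experiment: grow an XML message, charge the server with parsing plus a cryptographic operation (an HMAC stands in for XML Signature), and record CPU time per message size.

```python
import hashlib, hmac, time
import xml.etree.ElementTree as ET

KEY = b"shared secret"  # stand-in for the real XML Signature keying material

def build_message(n_items):
    root = ET.Element("order")
    for i in range(n_items):
        ET.SubElement(root, "item", id=str(i)).text = "x" * 50
    return ET.tostring(root)

def sign_and_parse(payload):
    """Stand-in for the server's work: parse the XML, then authenticate it."""
    ET.fromstring(payload)                                     # XML parsing cost
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()  # crypto cost

for n in (100, 1000, 10000):
    msg = build_message(n)
    start = time.process_time()       # CPU time, not wall-clock
    for _ in range(20):
        sign_and_parse(msg)
    cpu = time.process_time() - start
    print(f"{len(msg):>8} bytes: {cpu:.3f} s CPU for 20 messages")
```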
3:30 PM - 3:45 PM
F. Dika (University of Vienna, Vienna, Austria), V. Xhafa (University of Prishtina, Prishtina, Kosovo)
Evaluating the impact of the number of processor cores, cache memory and threads during the parallel execution of programs 
The speed of a computer's work depends directly on the number of processor cores used in the parallel execution of programs. But there are other parameters that have an impact on computing speed, such as the maximum allowed number of threads and the size and organization of the cache memory inside the processors. To determine the impact of each of these parameters, we experimented with different types of computers, measuring their speed while solving a given problem. To compare the impact of particular parameters on computing speed, we present the results graphically, through joint diagrams for each parameter.
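A minimal sketch of this kind of measurement: a fixed CPU-bound workload timed under a varying number of workers (processes rather than threads here, so that Python's GIL does not mask the scaling). The workload and sizes are arbitrary.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n):
    """Fixed CPU-bound unit of work."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def measure(workers, chunks=8, n=2_000_000):
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(burn, [n] * chunks))
    return time.perf_counter() - start

if __name__ == "__main__":
    for workers in (1, 2, 4, 8):
        print(f"{workers} workers: {measure(workers):.2f} s")
```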
3:45 PM - 4:00 PM
K. Kolic, M. Gusev, S. Ristov (Ss. Cyril and Methodius University, Faculty of Computer Science and Engineering, Skopje, Macedonia)
Performance Analysis of a New Cloud e-Business Solution 
In this paper we explore whether applications with scaled user demands, such as the number of HTTP transactions, achieve proportionally scaled performance. In addition, we explore which configuration is most convenient and achieves the best performance, as a final conclusion about performance when scaling resources or demands. The experiments are performed on an e-Business solution hosted as a SaaS application on Windows Azure. Finding an optimal cloud infrastructure is not a trivial problem, since there are many ways to organize multiple virtual machines in a cloud. The results obtained from the experiments show that the optimal configuration is based on a smaller number of bigger VMs. In addition, the number of operations impacts the performance proportionally.
4:00 PM - 4:15 PM
S. Memon, M. Riedel (Juelich Supercomputing Centre, Forschungszentrum Juelich GmbH, Juelich, Germany), C. Koeritz, A. Grimshaw (Department of Computer Science, University of Virginia, Charlottesville, United States)
Interoperable Job Execution and Data Access through UNICORE and the Global Federated File System 
Computing middleware plays a vital role in abstracting the complexities of backend resources by providing seamless access to heterogeneous execution management services. Scientific communities take advantage of such technologies to focus on science rather than dealing with the technical intricacies of accessing resources. Enabling computing middleware for big data platforms is becoming a significant goal for multi-disciplinary scientific communities, since middleware can assist in implementing use cases for analytics and the processing of large data sets. There are applications in many scientific disciplines for analyzing big data on high-performance computing resources. To support that effort, we have taken the UNICORE HPC middleware and extended it to integrate with the Global Federated File System (GFFS). In this paper we present how GenesisII users can leverage high performance computing resources through UNICORE and the GFFS to support data science across institutions, and we propose an open standards based integration of UNICORE and the GFFS.
4:15 PM - 4:30 PM
P. Škoda (Ruđer Bošković Institute, Zagreb, Croatia), V. Sruk (Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia), B. Medved Rogina (Ruđer Bošković Institute, Zagreb, Croatia)
Multi-stream 2D frequency table computation on dataflow architecture 
Frequency table computation is a common procedure used in a variety of machine learning algorithms. In this paper we present a parallelized kernel for computing frequency tables. The kernel targets a dataflow architecture implemented on a field programmable gate array (FPGA). Its performance was evaluated against a parallelized software implementation running on a 6-core CPU. The kernel, with six concurrent input data streams running at 300 MHz, achieved a speedup of up to 6.26× compared to a 6-threaded software implementation running on a 3.2 GHz CPU.
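The FPGA kernel cannot be shown in a few lines; the numpy sketch below mirrors the computation it parallelizes: the input is split into concurrent streams, each stream accumulates a partial 2D frequency table, and the partials are merged. Bin counts and data are illustrative.

```python
import numpy as np

def freq_table_2d(x, y, bins_x, bins_y, streams=6):
    """2D frequency table computed as `streams` partial tables, then merged,
    mirroring one accumulator per concurrent input stream."""
    parts = zip(np.array_split(x, streams), np.array_split(y, streams))
    partial = [np.histogram2d(xs, ys, bins=(bins_x, bins_y),
                              range=((0, bins_x), (0, bins_y)))[0]
               for xs, ys in parts]
    return np.sum(partial, axis=0)

rng = np.random.default_rng(1)
x = rng.integers(0, 16, size=60_000)   # e.g. a discretized feature value
y = rng.integers(0, 4, size=60_000)    # e.g. a class label
table = freq_table_2d(x, y, bins_x=16, bins_y=4)
print(table.shape, table.sum())        # (16, 4) 60000.0
```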
4:30 PM - 4:45 PM
M. Kostoska, A. Donevski, M. Gusev, S. Ristov (Ss. Cyril and Methodius University, Faculty of Computer Science and Engineering, Skopje, Macedonia)
Porting an N-tier Application on Cloud using P-TOSCA: A Case Study 
Although many companies and universities have migrated their services, applications, and data to the cloud, a significant portion have not yet moved, for a variety of reasons. Migration to another platform can cause the client many problems, and several directions have emerged to automate it as much as possible. P-TOSCA is a recently proposed Platform as a Service (PaaS) extension of TOSCA for automated application portability. In this paper, we demonstrate the migration of an application with the most common N-tier architecture from on-premise to cloud, as well as from one cloud to another, using the P-TOSCA portability model, the development of the application topology, and the execution plan. Although the presented demo was tested by migrating and transferring an N-tier application to the Eucalyptus and OpenStack open source clouds, this model and procedure can be used for automated migration and transfer among any open standard that supports PaaS and the P-TOSCA application specification.
4:45 PM - 5:00 PM
E. Markoska, I. Chorbev, S. Ristov, M. Gusev (Ss. Cyril and Methodius University, Faculty of Computer Science and Engineering, Skopje, Macedonia)
Cloud Portability Standardisation Overview 
With the development of various cloud technologies and their increased usage, a newly arisen challenge is the migration of cloud-based applications and data among various cloud environments and vendors. In the absence of a standardised means by which full interoperability and portability can be achieved, various tools exist in the form of APIs, standards, and protocols. In this paper we conduct an overview of the state of the art on the migration and portability of cloud-based applications, in order to determine the challenges that today's technologies pose, as well as to find the most appropriate automated way to port an application from one cloud to another, or to migrate an on-premise application to the cloud.
5:00 PM - 5:15 PM
T. Križan, M. Brakus, D. Vukelić (Poslovna inteligencija d.o.o., Zagreb, Croatia)
In-Situ Anonymization of Big Data 
With organisations openly publishing their data for further processing, privacy becomes an issue. Such published data should retain its original structure while protecting sensitive personal data. Our aim was to develop fast and secure software for online and/or offline anonymization of (distributed) Big Data. Herein, we describe speed and security requirements for anonymization techniques, popular methods of anonymization, and deanonymization attacks. We give a detailed description of our solution for in-situ anonymization of Big Data distributed in a cluster, together with performance benchmarks done on a provided real telco customer data record (CDR) dataset of around 500 GB.
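The paper's full technique set is not reproduced here; as one standard building block of such a pipeline, here is a sketch of keyed pseudonymization of identifying CDR fields that leaves the record structure intact. The field names and key are hypothetical.

```python
import csv, hmac, hashlib, io

SECRET = b"per-deployment secret key"  # hypothetical; kept outside the dataset

def pseudonymize(value):
    """Deterministic keyed hash: the same input maps to the same token,
    but tokens cannot be reversed without the key."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

def anonymize_cdr(reader, writer, sensitive=("caller", "callee")):
    header = next(reader)
    writer.writerow(header)
    idx = [header.index(col) for col in sensitive]
    for row in reader:
        for i in idx:
            row[i] = pseudonymize(row[i])
        writer.writerow(row)   # structure and non-sensitive fields unchanged

src = io.StringIO("caller,callee,duration\n+38591111,+38592222,65\n")
dst = io.StringIO()
anonymize_cdr(csv.reader(src), csv.writer(dst))
print(dst.getvalue())
```

Determinism preserves joinability across the cluster's partitions, which is what makes in-situ, distributed anonymization practical; it is also exactly the property that deanonymization attacks target, hence the paper's attention to them.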
5:15 PM - 5:30 PM Coffee break 
Visualization Systems 
5:30 PM - 5:45 PM
Z. Juhasz, G. Kozmann (University of Pannonia, Veszprem, Hungary)
A GPU-based Simultaneous Real-Time EEG Processing and Visualization System for Brain Imaging Applications 
This paper describes the current status of a novel GPU-based EEG processing system under development at the University of Pannonia. Traditional EEG measurement evaluation is typically performed in Matlab, which is a very convenient environment but whose computational speed is unacceptably low. State-of-the-art EEG systems can employ up to 128 electrodes and work at a 2 kHz sampling frequency. Even a few seconds of measurement in such a system can generate gigabytes of data whose processing can take several minutes in Matlab. Several application areas, such as epilepsy diagnosis and multi-patient studies, require much faster processing. We have developed a GPU-based massively parallel system that demonstrates that near real-time processing speed is achievable in key brain imaging algorithms, such as the spherical surface Laplacian or the forward solution for source localization. At the centre of our implementation is the GPU architecture, which acts as a computing and visualization engine. The most advanced cards today provide 1-5 teraflops of computational performance on a single chip at approximately 100 W power consumption; this performance per watt cannot be matched by any CPU-based parallel system. The final paper will describe the architecture of our system, the most crucial design decisions, the forward solver and surface Laplacian algorithms, and their parallel implementation. The inter-operation of the CUDA computing and OpenGL visualization subsystems, as well as several performance optimization techniques that ensure maximum utilization of the GPU processors, are described in detail, highlighting the importance of carefully optimizing data transfer paths and GPU memory usage, and how to achieve maximum instruction throughput. The most important result presented in the paper is the achieved computational performance (up to 3 orders of magnitude faster than Matlab), which is maintained during simultaneous interactive 3D visualization of the results. Our forward solver implementation is faster than any other GPU-based method known to us from the literature. Our fast implementation of the surface Laplacian can serve as a basis for high-resolution cognitive studies and BCI applications. The system is integrated into a 4K resolution 2x2-screen display wall, creating a very high-performance yet cost-effective brain imaging system. Future development plans include the integration and coupling of other imaging modalities such as MRI and fMRI, as well as implementing fast realistic head-model based imaging algorithms.
5:45 PM - 6:00 PM
L. Becirspahic, A. Karabegović (Faculty of Electrical Engineering, University of Sarajevo, Sarajevo, Bosnia and Herzegovina)
Web Portals for Visualizing and Searching Spatial Data  
Spatial data have proven to be extremely useful in many areas. Accordingly, there are more and more web GIS solutions, which make the visualization of spatial data available to all Internet users. The large number of available online resources has introduced the problem of efficient retrieval of relevant information. The aim of this paper is to describe the way in which web portals can be used for searching and presenting spatial data. The paper first analyzes the web portal area. It provides an overview of web GIS technology, with particular emphasis on its advantages and disadvantages and on how the client/server architecture can be applied to web GIS solutions. It then describes how service oriented architecture can be used to resolve interoperability problems, and gives an overview of the OGC standards WMS, WFS, and WCS. Finally, it explains the need for introducing a geoportal as a special type of web portal. In the practical part of this paper, an interactive map of Bosnia and Herzegovina is implemented. An overview is given of the technologies PostGIS, Boundless SDK, GeoServer, and OpenLayers, which are an integral part of the OpenGeo Suite. The paper also describes how to integrate local data sources with the public web services MapQuest, Bing Maps, and OpenStreetMap.
6:00 PM - 6:15 PM
J. Opiła (AGH University of Science and Technology, Cracow, Poland)
Prototyping of Visualization Styles of 3D Scalar Fields Using POVRay Rendering Engine 
There is a persistent quest for novel methods of visualization in order to gain insight into complex phenomena in scientific domains as various as physics, biomedicine, or economics. The research teams involved have achieved excellent results; however, some problems with the elaboration of novel visualization styles, connected with the flexibility of the software used and the quality of the final images, still persist. This paper discusses the results of an inspection of four visualization styles for 3D static scalar fields employing the POV-Ray ray-tracing engine: the equipotential surface method using a direct implementation of the isosurface{} object, a multilinear cellular interpolation approach, the application of texture, and finally a pseudoparticle design. All the styles presented have been tested in hybrid visualizations and compared with respect to computing time, informativeness, and general appearance. The work shows that the Scene Description Language (SDL), the domain-specific language implemented in POV-Ray, is flexible enough to be used as a tool for fast prototyping of novel visualization techniques. The visualizations discussed in the paper were computed using selected components of the API of ScPovPlot3D, i.e. templates written in SDL.
6:15 PM - 6:30 PM
M. Volarević, P. Mrazović, Ž. Mihajlović (Faculty of Electrical Engineering and Computing, Zagreb, Croatia)
Freeform spatial modelling using depth-sensing camera 
This paper proposes a novel 3D direct interaction modelling system for manipulating voxel-based objects using currently available 3D motion sensing input devices such as the Microsoft Kinect. The guiding principle while developing the system was to imitate natural human modelling behaviour and provide a real-life experience of 3D object manipulation, inspired by the techniques used in modelling clay. Properties of a functional prototype application are presented and the software architecture of the created tool is analysed. Descriptions of the newly developed algorithms are included, grouped into several categories. Visualization algorithms are used for defining properties and creating usable modelling mass from volumetric models. Algorithms related to object recognition and human-computer interaction, on the other hand, include various techniques for depth segmentation, contour detection, finger recognition, and virtual control with gestures.
6:30 PM - 6:45 PM
M. Ivančić, Ž. Mihajlović (University of Zagreb Faculty of Electrical Engineering and Computing, Zagreb, Croatia), I. Ivančić (University of Zagreb Faculty of Science Department of Geophysics, Zagreb, Croatia)
Seismic data visualisation 
The focal mechanism of an earthquake and fault plane solutions provide insight into the geology of an area. The objective of this paper is to provide a means of better understanding fault movement when an earthquake occurs. We present an interactive 3D visualisation of possible fault plane solutions, which are usually represented as 2D “beach-ball” plots, together with their respective block models. Additionally, a 3D elevation display of an area, with the respective beach-ball earthquake representations, provides a more in-depth picture of seismological maps under the Earth's surface. The program solution allows a better grasp of space, which is useful to students of seismology and anyone who wants to visualize the fault movement of an earthquake.
6:45 PM - 7:00 PM
M. Kozlovszky (Obuda University, MTA SZTAKI, Budapest, Hungary), D. Zavec Pavlinić (Titera Technical Innovative Technologies Ltd., Maribor, Slovenia), A. Oder (Prevent&Deloza Ltd., Celje, Slovenia), G. Fehér (OMT-LAB, Budapest University, Budapest, Hungary), P. Bogdanov (Obuda University, Biotech Lab, Budapest, Hungary)
Situation and location awareness in harsh environment 
Enhanced environment awareness is a key element of life protection in dangerous situations. The clothing industry is thus pushed to develop highly innovative “intelligent” products and to equip them with sensors and actuators combined with smart technologies. Besides the basic functions of protection and thermal and ergonomic comfort, these new-generation protective garments have additional functions such as monitoring and/or alarming for prevention in emergency situations. Such systems are costly because, even if the clothing and engineering parts of the development are feasible, their combination is far from trivial. The different system parts have different requirements, and these requirements are sometimes not compatible with each other. Parameters such as cost, bulkiness, accuracy, independence, scalability, and robustness are key, and can rapidly drive either acceptance or rejection by end users. Our research focuses on the seamless integration of the various subsystems to develop effective and user-friendly, intelligent, situation-aware personal protection systems. Our solution helps to receive accurate information about user performance and activity conditions. The main goal of our work is to adapt an already existing monitoring solution and provide it to people working in harsh environments. The paper describes the results of our development with a description of the main functionalities.
Friday, 5/29/2015 9:00 AM - 1:00 PM,
Camelia 2, Grand hotel Adriatic, Opatija
Visualization Systems 
9:00 AM - 9:15 AM
H. Kostadinov (IMI-BAS, Sofia, Bulgaria), N. Manev (USEA "Lyuben Karavelov" and IMI-BAS, Sofia, Bulgaria)
Error Correcting Codes and Their Usage in Steganography and Watermarking 
Steganography and digital watermarking are concerned with embedding information in digital media such as images, audio signals, and video. Both disciplines develop methods for concealing a message (a sequence of bits) by modifying the host (cover) digital object, but their goals are slightly different. In this work we discuss two methods of information embedding in the spatial domain: matrix embedding by q-ary codes, and a method based on pseudo-noise patterns which exploits the erasure capability of error correcting codes. We have made numerous experiments with pictures from several galleries (with grey-scale and color images) and many error control codes for both types of algorithms. Our observations show that q-ary codes are a good choice in the case of syndrome embedding.
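As a worked miniature of syndrome (matrix) embedding (binary here rather than q-ary), the parity-check matrix of the (7,4) Hamming code lets 3 message bits be embedded into 7 cover LSBs while changing at most one of them; the cover and message values below are arbitrary.

```python
import numpy as np

# Parity-check matrix of the binary (7,4) Hamming code: column i is the
# binary representation of i+1, so a nonzero syndrome names the flip position.
H = np.array([[(i >> k) & 1 for i in range(1, 8)] for k in range(2, -1, -1)])

def embed(cover_bits, message_bits):
    """Matrix embedding: force H @ stego = message, flipping at most 1 bit."""
    stego = cover_bits.copy()
    delta = (H @ cover_bits - message_bits) % 2   # required syndrome change
    pos = int("".join(map(str, delta)), 2)        # 0 means already matching
    if pos:
        stego[pos - 1] ^= 1                       # column `pos` equals delta
    return stego

def extract(stego_bits):
    return (H @ stego_bits) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])           # 7 cover pixel LSBs
msg = np.array([1, 1, 0])                         # 3 message bits
stego = embed(cover, msg)
print(extract(stego), (stego != cover).sum())     # [1 1 0] and 1 bit changed
```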
9:15 AM - 9:30 AM
M. Kranjac (Faculty of Technical Sciences Novi Sad, Novi Sad, Serbia), U. Sikimić (Politecnico di Milano, Milan, Italy), M. Vujaković (Center for Standardization and Certification, Novi Sad, Serbia)
Model for calculation of postal service efficiency by using GIS 
The goal of this paper is to find a model for using GIS in the monitoring and evaluation of postal service management. The methodology used is to visualize data about work efficiency by creating separate layers and analyzing them. The analysis of the results shows the big potential of GIS as a tool for economic analysis. In the case of the postal system of Serbia, it proves that cross-cutting analysis of various factors and their visualization gives a better model for the economic improvement of system management. The scientific contribution of this paper is to introduce GIS as a valuable tool for economic research.
9:30 AM - 9:45 AM
D. Letic, I. Berkovic (Tehnicki fakultet "Mihajlo Pupin" Zrenjanin, Univerzitet u Novom Sadu, Zrenjanin, Serbia)
Generalization of Hypersphere Function 
This paper presents the results of theoretical research on the hypersphere function, based on the generalization of two known functions referring to the hypersurface and hypervolume of a sphere, and on a supposed recurrent relation between them. On the basis of two introduced degrees of freedom, the generalization of these functions is performed, yielding a special continual function, i.e. a generalized hypersphere function. Symbolic evaluation and numerical experiments are realized with the program packages MathCAD Professional and Mathematica.
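For reference, the two known functions being generalized, for the n-dimensional ball of radius r, and the recurrent relation between them (the paper's generalization replaces the integer dimension with two continuous degrees of freedom):

$$V_n(r) = \frac{\pi^{n/2}}{\Gamma\!\left(\tfrac{n}{2}+1\right)}\, r^{n}, \qquad S_n(r) = \frac{2\pi^{n/2}}{\Gamma\!\left(\tfrac{n}{2}\right)}\, r^{n-1}, \qquad S_n(r) = \frac{dV_n(r)}{dr}.$$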
9:45 AM - 10:00 AM
S. Rizvic, V. Okanovic, A. Sadzak (Faculty of Electrical Engineering Sarajevo, Sarajevo, Bosnia and Herzegovina)
Visualization and multimedia presentation of cultural heritage 
Understanding the past is one of the key factors of human culture. Cultural heritage contributes to the preservation of collective memory. It is difficult to imagine the original appearance of the monuments while observing their archeological remains. Digital technologies are an efficient tool for visualization and multimedia presentation of cultural heritage. This paper describes the use of these technologies through the workflow of the White bastion 4D visualization project, from collecting information, through 3D modeling and texturing to interactive web implementation using the most advanced web 3D technologies. We discuss the advantages and drawbacks of each technology for this particular purpose.
10:00 AM - 10:15 AM
D. Sušanj, V. Tuhtan (Faculty of Engineering, Rijeka, Croatia), L. Lenac (Engineering Faculty, Rijeka, Croatia), G. Gulan (Faculty of Medicine, Rijeka, Croatia), I. Kožar (Department of Computer Modeling, Civil Faculty of Engineering, Rijeka, Croatia), Ž. Jeričević (Department of Computer Engineering, Faculty of Engineering, Rijeka, Croatia)
Using Entropy Information Measures for Edge detection in Digital Images 
Shannon information entropy measures were used as filters of different kernel sizes to detect edges in digital images. The concept is based on communications theory, with the edge detection kernel split into source and destination parts. The arbitrary shape of the kernel parts, and the fact that the information filter output is a real number with a reduced edge-continuity problem, are the major advantages of this approach. The results are compared and combined with traditional edge detection algorithms such as Sobel to illustrate the performance and sensitivity of the information entropy filters. Theoretical results on differently shaped synthetic edges, and experimental results on sampled images with different noise levels and intensity resolutions, were studied in detail. The real-life examples are taken from medical X-ray imaging of a series of knee joints, in order to illustrate the algorithm's performance on real data.
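A minimal numpy sketch of the idea as described, assuming a normalized grayscale image: the kernel is split into a left (source) and right (destination) half, and the edge response is the difference of their Shannon entropies. The real filters allow arbitrarily shaped kernel parts; this fixed vertical split is only for illustration.

```python
import numpy as np

def shannon_entropy(patch, bins=8):
    """Entropy of a patch's intensity histogram, in bits."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_edges(img, k=5):
    """Edge strength = |H(left half of kernel) - H(right half of kernel)|."""
    h = k // 2
    out = np.zeros_like(img, dtype=float)
    for r in range(h, img.shape[0] - h):
        for c in range(h, img.shape[1] - h):
            src = img[r - h:r + h + 1, c - h:c]           # left (source) half
            dst = img[r - h:r + h + 1, c + 1:c + h + 1]   # right (destination) half
            out[r, c] = abs(shannon_entropy(src) - shannon_entropy(dst))
    return out

img = np.zeros((32, 32)); img[:, 16:] = 1.0   # vertical step edge
resp = entropy_edges(img)
print(resp[:, 14:18].max(), resp[:, :8].max())  # strong near the edge, 0 in flat areas
```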
10:15 AM - 10:30 AM Coffee break 
Biomedical Engineering 
10:30 AM - 10:45 AM
M. Brložnik (Clinic for small animals PRVA-K, Ljubljana, Slovenia), V. Avbelj (Jožef Stefan Institute, Ljubljana, Slovenia)
Wireless Electrocardiographic Monitoring in Veterinary Medicine 
A comfortable option for long-term monitoring of heart activity will hopefully soon be available in veterinary medicine as well. This is the first report of ECG data obtained in animals with a wireless body electrode attached to the skin and connected to a smartphone via low-power Bluetooth technology. The ECG data were obtained from two dogs, one with sinus rhythm and the other with atrial fibrillation and ventricular extrasystoles. Besides determining average heart and respiratory rates and identifying sinus rhythm, atrial fibrillation, and ventricular premature complexes, the device is able to identify ventricular preexcitation and asynchronous depolarization of the ventricles, which were not seen in a standard 6-lead ECG. With appropriate software this device offers immense potential for veterinary cardiology. Due to its simplicity, long autonomy, and high reliability it can complement conventional Holter monitoring in veterinary medicine.
10:45 AM - 11:00 AM
M. Mandžuka (Faculty of Electrical Engineering, University of Sarajevo, Sarajevo, Bosnia and Herzegovina), E. Begić (Faculty of Medicine, University of Sarajevo, Sarajevo, Bosnia and Herzegovina), Z. Begić (Paediatric Clinic, Clinical Center University of Sarajevo, Sarajevo, Bosnia and Herzegovina), D. Bošković (Faculty of Electrical Engineering, University of Sarajevo, Sarajevo, Bosnia and Herzegovina)
Software for Acid-Base Status Disorders Management 
Normal acid-base status means that the concentration of hydrogen ions in extracellular fluid is within the pH range of 7.36 (44 nmol/L) to 7.44 (36 nmol/L). Acid-base balance disorders may result from an increase or reduction in the amount of hydrogen ions, and include four basic disorders: metabolic and respiratory acidosis, and metabolic and respiratory alkalosis. The first step is diagnosing the type of acid-base status disorder. The application input consists of blood analysis data and body weight information. The next step is determining the significance of the disorder and, finally, recommending future treatment. The software offers the optional functionality of entering mineral level values to improve treatment recommendations. Following the latest trends, and with the aim of increasing the availability and mobility of the system, the software is developed for the Android platform using the Java programming language.
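The application's decision logic is not published in the abstract; a minimal sketch of its first step, classifying the primary disorder from pH, pCO2, and bicarbonate using standard reference ranges, might look as follows (a real tool must also handle compensation and mixed disorders).

```python
def primary_disorder(ph, pco2_kpa=None, pco2_mmhg=None, hco3=24.0):
    """Classify the primary acid-base disorder.
    Reference ranges: pH 7.36-7.44, pCO2 35-45 mmHg, HCO3- 22-26 mmol/L."""
    if pco2_mmhg is None:
        pco2_mmhg = pco2_kpa * 7.5  # approximate kPa to mmHg conversion
    if 7.36 <= ph <= 7.44:
        return "normal pH (possibly compensated or mixed disorder)"
    if ph < 7.36:  # acidemia
        return ("respiratory acidosis" if pco2_mmhg > 45
                else "metabolic acidosis" if hco3 < 22
                else "acidemia, indeterminate primary cause")
    # alkalemia
    return ("respiratory alkalosis" if pco2_mmhg < 35
            else "metabolic alkalosis" if hco3 > 26
            else "alkalemia, indeterminate primary cause")

print(primary_disorder(ph=7.28, pco2_mmhg=60))           # respiratory acidosis
print(primary_disorder(ph=7.30, pco2_mmhg=30, hco3=14))  # metabolic acidosis
```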
11:00 AM - 11:15 AM
I. Tomasic (Mälardalen University, Västerås, Sweden), R. Trobec, A. Rashkovska (Jožef Stefan Institute, Ljubljana, Slovenia), M. Lindén (Mälardalen University, Västerås, Sweden)
Impact of electrode misplacement on the accuracy of 12-lead ECG synthesized from differential leads 
It is a well-known fact that misplacement of electrodes from the standard electrode positions for the acquisition of the 12-lead ECG can cause differences in ECG interpretation. It has also been shown that misplacement of the electrodes employed by reduced lead sets of the standard 12-lead ECG affects the synthesized 12-lead ECGs. In this study we use body surface potential maps (BSPM) to investigate how electrode misplacement affects 12-lead ECGs synthesized from differential leads (DLs). The 35-lead multichannel ECGs (MECGs) were collected from 20 healthy volunteers and 27 cardiac patients. The integral BSPMs, together with gradients, were calculated for each person for complete beats, as well as for the P, QRS, and ST-T intervals. We considered the worst-case scenario to be each electrode moved by a fixed distance in the direction of its associated gradient vector. The simulated displaced leads were taken from the BSPM and used for the synthesis of the 12-lead ECG. The resulting 12-lead ECGs were compared to the target 12-lead ECGs and to those synthesized from the correct positions. The differences between the three ECGs were compared by means of the root-mean-squared distance and the correlation coefficient.
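Both comparison measures are standard; for concreteness, a sketch of each applied to a synthesized versus target lead (the signals here are synthetic stand-ins, not the study's data).

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-squared distance between two equally sampled leads."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def corr(a, b):
    """Pearson correlation coefficient between two leads."""
    return float(np.corrcoef(a, b)[0, 1])

t = np.linspace(0, 1, 500)
target = np.sin(2 * np.pi * 3 * t)   # stand-in for a target 12-lead ECG lead
synthesized = target + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(f"RMSD = {rmsd(synthesized, target):.3f}, r = {corr(synthesized, target):.4f}")
```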
11:15 AM - 11:30 AM V. Avbelj (Jožef Stefan Institute, Ljubljana, Slovenia)
Extranodal Pacemaker Activity During Sleep - a Case Study of Wireless ECG Sensor Data 
Sleep apnea is a breathing disorder during sleep characterized by breathing cessations lasting at least 10 seconds. Sleep apnea is associated with sudden cardiac death, and every apnea event is accompanied by cardiovascular changes. In polysomnography, the cardiorespiratory sleep study performed in today's sleep laboratories, many parameters are recorded, such as ECG (electrocardiogram), EEG (electroencephalogram), respiration effort, oronasal airflow, snoring (by microphone) and blood oxygen saturation. We present a case where only a single-channel ECG obtained from wireless electrodes was used to study heart-rate dynamics during sleep apnea events. A sudden change in the morphology of the atrial P wave was observed in the ECG recording during sleep. This observation supports the hypothesis that the sinus node is not the only physiological pacemaking structure in the atrium.
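One plausible way to detect such a P-wave morphology change in a single-channel ECG is template correlation, sketched below under assumed timing parameters; this is illustrative, not the author's method.

```python
import numpy as np

FS = 250  # assumed sampling rate, Hz

def flag_p_wave_change(ecg, r_peaks, r_min=0.90):
    """Correlate each beat's P-wave window against a template from the
    first beats; low correlation flags a changed P-wave morphology."""
    pre, post = int(0.24 * FS), int(0.04 * FS)   # window ~240..40 ms before R
    windows = np.array([ecg[r - pre:r - post] for r in r_peaks if r >= pre])
    template = windows[:10].mean(axis=0)         # template from first 10 beats
    corr = np.array([np.corrcoef(w, template)[0, 1] for w in windows])
    return np.where(corr < r_min)[0]             # indices of suspicious beats
```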
11:30 AM - 11:45 AM M. Vasconcelos, L. Rosado (Fraunhofer Portugal AICOS, Porto, Portugal), M. Ferreira (Portuguese Institute of Oncology, Porto, Portugal)
A New Color Assessment Methodology using Cluster-based Features for Skin Lesion Analysis 
Prevention is essential in the fight against melanoma, the most dangerous form of skin cancer. The risk assessment of skin lesions usually follows the ABCD rule (asymmetry, border, color and dermoscopic structures); here, a methodology to assess the number of ABCD-rule colors is presented. It starts by extracting 660 color features, after which several feature selection and machine learning classification methods are applied. The methodology has the advantage of being adaptable to the dataset used. Two dermoscopic image datasets and one mobile-acquired dataset were used to test the methodology, achieving accuracy rates of 77.75%, 81.38% and 93.55%, respectively.
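A hedged sketch of such a pipeline using scikit-learn stand-ins: univariate selection over the 660 color features followed by a classifier. The specific estimators and the k=50 cut are illustrative assumptions; the paper evaluates several selection and classification methods.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier

# X: (n_lesions, 660) color features; y: number of ABCD-rule colors per lesion
pipeline = Pipeline([
    ("select", SelectKBest(f_classif, k=50)),  # keep 50 of the 660 features
    ("classify", RandomForestClassifier(n_estimators=200, random_state=0)),
])
# pipeline.fit(X_train, y_train)
# accuracy = pipeline.score(X_test, y_test)
```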
11:45 AM - 12:00 PM U. Čibej (University of Ljubljana, Faculty of Computer and Information Science, Ljubljana, Slovenia), Lojk, Pavlin (University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia), L. Šajn (University of Ljubljana, Faculty of Computer and Information Science, Ljubljana, Slovenia)
Automatic Adaptation of Filter Sequences for Cell Counting 
Manual cell counting in microscopic images is usually tedious, time-consuming and prone to human error. Several programs for automatic cell counting have been developed so far, but most of them demand specific knowledge of image analysis and/or manual fine-tuning of various parameters. Even when a set of filters is found and fine-tuned for a specific application, small changes in image attributes can make the automatic counter very unreliable. The goal of this article is to present a new application that overcomes this problem by learning the set of parameters for each application, thus making it more robust to changes in the input images. The user must provide only a small representative subset of images together with their manual counts, and the program offers a set of automatic counters learned from the given input. The user can check the counters and choose the most suitable one. The resulting application (which we call Learn123) is specifically tailored to practitioners: even though the typical workflow is much more complex, the application is easy to use for non-technical experts.
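The learning idea can be sketched as a search over the parameters of a fixed filter sequence, scored by how closely the automatic counts match the manual counts on the annotated subset. The blur/threshold/area pipeline and the grid below are illustrative assumptions, not Learn123's actual filters.

```python
import itertools
import numpy as np
from scipy import ndimage

def count_cells(image, sigma, threshold, min_area):
    """Stand-in filter sequence: Gaussian blur -> relative threshold ->
    connected components -> discard blobs smaller than min_area."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    mask = smoothed > threshold * smoothed.max()
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    return int(np.sum(areas >= min_area))

def learn_parameters(images, manual_counts):
    """Pick the parameter set whose automatic counts best match the
    manual counts on the small annotated image set."""
    grid = itertools.product((1, 2, 4),          # blur sigma (pixels)
                             (0.3, 0.5, 0.7),    # relative threshold
                             (20, 50, 100))      # minimum blob area (pixels)
    best, best_err = None, np.inf
    for params in grid:
        counts = [count_cells(img, *params) for img in images]
        err = np.mean(np.abs(np.asarray(counts) - np.asarray(manual_counts)))
        if err < best_err:
            best, best_err = params, err
    return best, best_err
```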
12:00 PM - 12:15 PM D. Zazula, J. Kranjec, P. Kranjec, B. Cigale (Univerza v Mariboru, FERI, Maribor, Slovenia)
Assessing Blood Pressure Unobtrusively by Smart Chair 
We developed a smart chair with unobtrusive sensors that measure functional-health parameters of a person sitting on it. Capacitive sensors are placed in the chair's backrest and seat, while the armrests support a combination of U-shaped electrodes and incorporated photoplethysmographic (PPG) sensors for synchronous electrocardiographic (ECG) and PPG measurements. In a set of experiments with 11 young males, the two types of signals were acquired. Time distances were estimated between the ECG R-wave peaks and the PPG foot points detected by our heartbeat search, yielding the so-called pulse transit times (PTTs). The experiments were conducted in two phases: the first at rest and the second after a minute of intensive squats. Reference systolic and diastolic blood pressures were taken just before every trial with a Critikon Dinamap Pro 300 sphygmomanometric device. Our goal was to model the relationship between blood pressure and the estimated PTTs in both experimental phases. The obtained linear models revealed an interesting observation: subjects reacted in two different physiological ways, with 6 of the 11 participants conforming to a different model than the other 5. The main difference is the rate of decrease of either systolic or diastolic pressure per 1-ms change of the PTT.
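The modelling step lends itself to a short sketch: pair each ECG R peak with the first following PPG foot to obtain PTTs, then fit a linear model BP = a * PTT + b. Peak and foot detection are assumed already done; the slope a corresponds to the pressure change per 1-ms change of the PTT discussed above.

```python
import numpy as np

def pulse_transit_times(r_peak_times, ppg_foot_times):
    """Pair each ECG R peak with the first PPG foot that follows it."""
    ptts = []
    for r in r_peak_times:
        later = ppg_foot_times[ppg_foot_times > r]
        if later.size:
            ptts.append(later[0] - r)
    return np.asarray(ptts)

def fit_bp_model(ptt_ms, bp_mmhg):
    """Least-squares fit of BP = a * PTT + b; the slope a is the pressure
    change in mmHg per 1-ms change of the PTT."""
    a, b = np.polyfit(ptt_ms, bp_mmhg, deg=1)
    return a, b
```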
12:15 PM - 12:30 PM M. Pavlin (Hyb d.o.o., Šentjernej, Slovenia), F. Novak (IJS, Ljubljana, Slovenia)
Towards noninvasive bioimpedance sensor design based on wide bandwidth ring resonator 
Miniature and noninvasive sensors play a major role in healthcare by monitoring vital health parameters in real time. Furthermore, energy efficiency is an important issue in human-centric embedded systems. A contact-less bioimpedance measurement principle has been developed and validated. The proposed sensor aims at providing detailed insight into tissue composition and water concentration variation, which could in turn be implemented in a new low-power miniaturized medical device. The basic idea stems from the permittivity measurement of electronic circuit substrates: a common procedure is to design and fabricate a substrate with a given resonator structure, where the dielectric constant can be calculated from the measured resonant frequency and the known resonator geometry and substrate thickness. A similar principle can be employed for bioimpedance measurement. In our case, the resonator was designed so that the resonant frequency lies within the responsive range of the target tissue, where the frequency changes caused by physiological processes are largest. The changed value of the frequency reflects the dielectric properties of the target object, so the measured frequency implicitly reflects biological and physiological processes within it. Consequently, this approach offers an innovative way to monitor the respiratory cycle of a patient. A typical application is apnea monitoring during sleep, providing adequate warning to caregivers of certain life-threatening respiratory events. The above idea has been developed and evaluated by numerical simulations. The model of the resonator structure was studied under different boundary conditions and loads (i.e., different human body tissues) according to the expected application in practice. The obtained results were within the range of the human bioimpedance frequency response. Simulation results were compared with measurements on a physical prototype of the resonator, performed with a vector network analyzer. The key parameter for oscillator design (scattering parameter S21) was measured between 2 MHz and 6 GHz. The simulation and measurement results showed a good match at the resonant frequencies, confirming the feasibility of the proposed approach. The proposed principle is implemented in a prototype of a contact-less bioimpedance device consisting of a low-power embedded microcontroller and a DDS (Direct Digital Synthesizer) unit, focused on low power consumption, low cost and miniaturization.
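The resonator principle admits a simple worked example: for a ring resonator, resonance occurs when the mean circumference fits an integer number of guided wavelengths, f_n = n * c / (2 * pi * r * sqrt(eps_eff)), so a rise in the effective permittivity of the loading tissue (e.g., with water content) lowers the resonant frequency. The radius and permittivity values below are assumed for illustration, not taken from the paper.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ring_resonant_frequency(radius_m, eps_eff, n=1):
    """n-th resonance of a ring resonator: 2*pi*r = n * lambda_g."""
    return n * C / (2 * math.pi * radius_m * math.sqrt(eps_eff))

R = 0.01  # assumed 1 cm mean ring radius
for eps in (5.0, 20.0, 50.0):  # e.g. fat-like vs. muscle-like tissue loading
    f = ring_resonant_frequency(R, eps)
    print(f"eps_eff = {eps:5.1f} -> f1 = {f / 1e9:.2f} GHz")
```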
12:30 PM - 12:45 PM D. Tomic (Hewlett-Packard, Zagreb, Croatia)
Exploring bacterial biofilms with NAMD, a highly scalable parallel code for complex molecular simulations 
Bacterial biofilms are highly complex structures. It is increasingly recognized that the biofilm, rather than the planktonic form, is the predominant mode of bacterial existence. Under certain circumstances, bacteria start to build a biofilm and form a 3D structure, the so-called extracellular matrix. Enclosed within this matrix, bacteria become more resistant to host defense mechanisms and to most antibiotics, thus expressing considerably higher virulence and antibiotic resistance than in their planktonic form. Because of their importance, numerous researchers have investigated bacterial biofilms over the last decades using methods such as electron microscopy, mass spectroscopy and nuclear magnetic resonance. However, none of these methods is able to reveal the exact structure of the extracellular matrix, and exploring its dynamics is even more complex and out of reach for known analysis methods. For these reasons a more effective method is needed, and computer-driven simulation could be one. To check whether it could be the method of choice, we estimated the computational resources needed to simulate a bacterial biofilm. We found that performing such a simulation in reasonable time is not possible on the fastest supercomputers of today, and will not become possible until at least 2028. We then explored the possibility of running NAMD-based bacterial biofilm simulations in the Cloud, and reached the same conclusion. Moreover, we found that for both approaches NAMD would have to extend its scalability from the current roughly 500,000 cores to many millions of cores.
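The feasibility estimate can be illustrated with a back-of-envelope calculation: scale an assumed NAMD throughput at a benchmark system size up to a biofilm-sized system, with cost growing roughly linearly in atom count at fixed parallel efficiency. All figures below are illustrative assumptions, not the paper's numbers.

```python
REF_ATOMS  = 1_000_000   # benchmark system size (atoms)
REF_CORES  = 500_000     # roughly NAMD's current scaling limit (per abstract)
REF_NS_DAY = 50.0        # assumed throughput at that size and core count

BIOFILM_ATOMS = 1e11     # assumed atoms in a biofilm patch incl. matrix and water
TARGET_NS     = 1_000    # assumed trajectory length needed (ns)

# MD cost grows roughly linearly with atom count at fixed parallel efficiency
ns_per_day = REF_NS_DAY * REF_ATOMS / BIOFILM_ATOMS
days = TARGET_NS / ns_per_day
print(f"~{days:,.0f} days ({days / 365:,.0f} years) at {REF_CORES:,} cores")
```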
12:45 PM - 1:00 PM A. Balota (Fakultet za informacione tehnologije, Podgorica, Montenegro)
Information System of the Institute for Blood Transfusion of Montenegro 
This paper presents the information system developed for the Institute for Blood Transfusion of Montenegro, which operates within the Integrated Health Information System of Montenegro. The introduction of IT support for transfusion services has provided greater availability and accuracy of donor data in the health system, as well as centralized, state-level monitoring of blood stocks in Montenegro. By introducing bar codes on collected blood units, the information system of the Institute for Blood Transfusion ensures compliance with the basic transfusion principle of tracking blood from the donor's vein to the recipient's vein. The information system will improve the possibilities for control and analysis of work and the rational use of resources, and will modernize the database of health statistics in the field of transfusion medicine. The administrative workload of health workers related to repeated data entry will be reduced, as will the possibility of error. Interoperability of business processes is fully achieved by using the common IT infrastructure of the health system, and the ultimate goal of this approach is to improve the quality, safety, efficiency, financial accountability and clinical use of blood components in Montenegro.

Scope:

The conference is devoted to presenting and exploring scientific and technological advancements and original innovative applications in the fields of Distributed computing, Visualization systems and Biomedical engineering (BME). Topics for this conference include, but are not limited to:

  • Distributed Computing related topics:
    • Grid Computing and Applications
    • Cluster Computing and Applications
    • Distributed and Parallel Programming Models and Tools
    • Distributed Research Infrastructure
    • Parallelization Strategies and Parallel Algorithms
    • Heterogeneous Computing, Adaptive and Reconfigurable Computing
    • Multi Core and Many Core Technologies and Programming
    • Virtual Organizations
    • Web Services and Applications
    • Internet Computing and Applications
    • e-Science Technologies
    • Multimedia and Hypermedia Technologies
    • Earth Sciences with Applications
    • Performance Scalability, Analysis and Benchmarks
  • Visualization related topics:
    • Scientific Visualization
    • Visualization in Engineering and Medicine
    • Parallel Visualization Methods and Algorithms
    • Biomedical Visualization
    • Distributed Visualization
    • Visualization Processing and Systems
    • Parallel Modelling and Rendering
    • Computer Interaction and Vision Applications
    • Computer-Aided Design
    • Visual Datamining
    • Visual Analytics
  • Biomedical Engineering (BME) related topics:
    • Informatization, Management and Organization of BME Environments
    • Bioinformatics and Computational Biology and Medicine
    • Communication, Networking and Monitoring in Bio-systems
    • Monitoring of Vital Functions with Sensor and ICT Systems
    • Biosensors and Sensor Networks
    • Advanced Bio-signal Processing
    • Distributed BME Applications
    • Telehealth, Telecare, Telemonitoring, Telediagnostics
    • e-Healthcare, m-Healthcare, x-Health
    • Assisted Living
    • Smartphones in BME Applications
    • Social Networking, Computing and Education for Health
    • Computer Aided Diagnostics
    • Improved Therapeutic and Rehabilitation Methods
    • Intelligent Bio-signal Interpretation
    • Data and Visual Mining for Diagnostics
    • Advanced Medical Visualization Techniques
    • Personalized Medical Devices and Approaches
    • Modelling and Computer Simulations in BME
    • Human Responses in Extreme Environments
    • Other Emerging Topics in BME

The official language of the conference is English.

Important dates:

Registration / Fees:

REGISTRATION / FEES (price in EUR)             Before 11 May 2015   After 11 May 2015
Members of MIPRO and IEEE                             180                  200
Students (undergraduate and graduate),
  primary and secondary school teachers               100                  110
Others                                                200                  220

Contact:

Karolj Skala
Rudjer Boskovic Institute, Bijenicka 54
HR-10000 Zagreb, Croatia
GSM: +385 99 3833 888
Fax: +385 1 4680 212
E-mail: skala@irb.hr

Jelena Cubric
Rudjer Boskovic Institute, Bijenicka 54
HR-10000 Zagreb, Croatia
E-mail: jcubric@irb.hr

The best papers will get a special award.
Accepted papers will be published in the ISBN-registered conference proceedings. Presented papers published in the Conference proceedings will be submitted for posting to IEEE Xplore.
Authors of outstanding papers will be invited to submit the extended version of their papers to a special issue of Scalable Computing: Practice and Experience (ISSN 1895-1767) published in the first quarter of 2016.


International Program Committee General Chair:

Petar Biljanović (Croatia)

International Program Committee:

Alberto Abello Gamazo (Spain), Slavko Amon (Slovenia), Vesna Anđelić (Croatia), Michael E. Auer (Austria), Mirta Baranović (Croatia), Ladjel Bellatreche (France), Eugen Brenner (Austria), Andrea Budin (Croatia), Željko Butković (Croatia), Željka Car (Croatia), Matjaž Colnarič (Slovenia), Alfredo Cuzzocrea (Italy), Marina Čičin-Šain (Croatia), Marko Delimar (Croatia), Todd Eavis (Canada), Maurizio Ferrari (Italy), Bekim Fetaji (Macedonia), Tihana Galinac Grbac (Croatia), Paolo Garza (Italy), Liljana Gavrilovska (Macedonia), Matteo Golfarelli (Italy), Stjepan Golubić (Croatia), Francesco Gregoretti (Italy), Stjepan Groš (Croatia), Niko Guid (Slovenia), Yike Guo (United Kingdom), Jaak Henno (Estonia), Ladislav Hluchy (Slovakia), Vlasta Hudek (Croatia), Željko Hutinski (Croatia), Mile Ivanda (Croatia), Hannu Jaakkola (Finland), Leonardo Jelenković (Croatia), Dragan Jevtić (Croatia), Robert Jones (Switzerland), Peter Kacsuk (Hungary), Aneta Karaivanova (Bulgaria), Dragan Knežević (Croatia), Mladen Mauher (Croatia), Igor Mekjavic (Slovenia), Branko Mikac (Croatia), Veljko Milutinović (Serbia), Vladimir Mrvoš (Croatia), Jadranko F. Novak (Croatia), Jesus Pardillo (Spain), Nikola Pavešić (Slovenia), Vladimir Peršić (Croatia), Goran Radić (Croatia), Slobodan Ribarić (Croatia), Janez Rozman (Slovenia), Karolj Skala (Croatia), Ivanka Sluganović (Croatia), Vlado Sruk (Croatia), Uroš Stanič (Slovenia), Ninoslav Stojadinović (Serbia), Jadranka Šunde (Australia), Aleksandar Szabo (Croatia), Laszlo Szirmay-Kalos (Hungary), Davor Šarić (Croatia), Dina Šimunić (Croatia), Zoran Šimunić (Croatia), Dejan Škvorc (Croatia), Antonio Teixeira (Portugal), Edvard Tijan (Croatia), A Min Tjoa (Austria), Roman Trobec (Slovenia), Sergio Uran (Croatia), Tibor Vámos (Hungary), Mladen Varga (Croatia), Marijana Vidas-Bubanja (Serbia), Boris Vrdoljak (Croatia), Robert Wrembel (Poland), Damjan Zazula (Slovenia)

Location:

Opatija, with its 170-year-long tourist tradition, is the leading seaside resort of the Eastern Adriatic and one of the most famous tourist destinations on the Mediterranean. With its aristocratic architecture and style, Opatija has been attracting renowned artists, politicians, kings, scientists and sportsmen, as well as business people, bankers and managers, for more than 170 years.

The tourist offering of Opatija includes a vast number of hotels, excellent restaurants, entertainment venues, art festivals, superb modern and classical music concerts, beaches and swimming pools, and is able to meet every demand.

Opatija, the Queen of the Adriatic, is also one of the most prominent congress cities on the Mediterranean, particularly known for the international ICT convention MIPRO, held in Opatija since 1979 and gathering more than a thousand participants from more than forty countries. These conventions promote Opatija as the most desirable technological, business, educational and scientific center in Southeast Europe and the European Union in general.


For more details please look at www.opatija.hr/ and www.opatija-tourism.hr/

 

Patrons:
Pomorski fakultet Rijeka, Tehnički fakultet Rijeka, FOI Varaždin, IRB Zagreb, HAKOM