Hybrid event

9:00 - 9:30 Company presentation
dSPACE engineering d.o.o.
B. Sirovec, T. Livaja (dSPACE engineering d.o.o., Osijek, Croatia), K. Pardon (dSPACE engineering d.o.o., Zagreb, Croatia) Latest and Secured Communication Systems and Architectures for the Next-generation Vehicle
In the automotive industry, reliable and secure communication systems are essential to ensuring safety, performance, data security, and integrity. Various communication concepts and protocols are described, beginning with an overview of basic unsecured communication protocols and their inherent vulnerabilities. The implementation principles of end-to-end (E2E) communication are highlighted, explaining their benefits and limitations. Furthermore, the concept of protected communication is examined in the context of automotive-specific challenges, including real-time constraints in a heterogeneous component environment. The paper describes secure communication mechanisms such as encryption, authentication, and intrusion detection. Finally, testing approaches are presented, including simulation and hardware-in-the-loop (HIL) environments, with a focus on the security and integrity of communication systems. The paper provides a fundamental understanding of designing and verifying secure communication architectures for the next generation of vehicles.
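E2E protection of the kind referenced above typically pairs each payload with a checksum and an alive counter so the receiver can detect both corrupted and stale or repeated frames. A minimal sketch, assuming a simple CRC-32 plus one-byte counter layout (the real automotive E2E profiles define stricter field layouts and CRC polynomials; this is illustrative only):

```python
import zlib

def e2e_protect(payload: bytes, counter: int) -> bytes:
    """Frame a payload with a one-byte alive counter and a CRC-32
    computed over counter + payload (illustrative layout)."""
    body = bytes([counter & 0xFF]) + payload
    crc = zlib.crc32(body).to_bytes(4, "big")
    return crc + body

def e2e_check(frame: bytes, last_counter: int):
    """Return (counter, payload) if the frame is intact and fresh,
    otherwise None."""
    crc, body = frame[:4], frame[4:]
    if zlib.crc32(body).to_bytes(4, "big") != crc:
        return None  # corrupted frame
    counter = body[0]
    if counter == last_counter:
        return None  # repeated frame: stale data
    return counter, body[1:]

# sender side
frame = e2e_protect(b"\x12\x34", counter=5)
# receiver side: fresh, intact frame passes
assert e2e_check(frame, last_counter=4) == (5, b"\x12\x34")
# a single flipped bit is detected
tampered = frame[:-1] + bytes([frame[-1] ^ 0x01])
assert e2e_check(tampered, last_counter=4) is None
```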
09:30 - 11:00 Papers
D. Miljković (Hrvatska elektroprivreda d.d., Zagreb, Croatia) Distributed Chips Architecture and Stacked Wafer Arrays: Paths to Exascale Computing 
Monolithic chip scaling limits have driven interest in 3D-organized distributed small chips and vertically stacked wafer arrays for scalable high-performance computing. This paper examines design approaches that leverage extreme parallelism, simplified thermal paths, and redundancy to achieve exascale performance in a compact footprint. It addresses fabrication challenges, interconnect strategies, and trade-offs between power efficiency and computational density. These architectures show promise for distributed intelligence workloads, data center optimization, and AI-specific hardware platforms.
D. Miljković (Hrvatska elektroprivreda d.d., Zagreb, Croatia) Large Language Models for Automated Motor Current Signature Analysis 
Motor Current Signature Analysis (MCSA) is a critical non-invasive technique for detecting mechanical and electrical faults in induction motors. Traditional MCSA requires expert knowledge to interpret frequency spectrum patterns and identify characteristic fault signatures. This paper explores the application of Large Language Models as intelligent diagnostic assistants for MCSA interpretation. Modern LLMs can effectively analyze motor current spectra, identify fault-related frequency components, and classify common motor faults including bearing defects, broken rotor bars, eccentricity, and misalignment. The system accepts frequency spectrum images or raw data along with motor specifications, then applies reasoning to detect anomalies based on established diagnostic criteria. Results show that LLMs can successfully identify characteristic fault frequencies, distinguish between multiple simultaneous faults, and provide diagnostic explanations that align with expert analysis. This approach has the potential to democratize MCSA by making expert-level diagnostic capabilities accessible to maintenance personnel with limited MCSA experience, while also serving as a decision support tool for experienced analysts. The integration of LLMs into predictive maintenance workflows represents a promising direction for industrial diagnostics.
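The characteristic fault frequencies mentioned above follow closed-form rules; broken rotor bars, for instance, produce sidebands at f_s(1 ± 2ks) around the supply frequency. A small sketch using these standard MCSA formulas (the motor figures below are illustrative):

```python
def slip(f_supply: float, poles: int, rotor_speed_rpm: float) -> float:
    """Per-unit slip from synchronous and actual rotor speed."""
    n_sync = 120 * f_supply / poles          # synchronous speed in rpm
    return (n_sync - rotor_speed_rpm) / n_sync

def broken_rotor_bar_sidebands(f_supply: float, s: float, k: int = 1):
    """Sideband pair around the supply frequency: f_s * (1 +/- 2*k*s)."""
    return (f_supply * (1 - 2 * k * s), f_supply * (1 + 2 * k * s))

# 4-pole motor on a 50 Hz supply running at 1455 rpm
s = slip(50.0, 4, 1455.0)                    # n_sync = 1500 rpm -> s = 0.03
lo, hi = broken_rotor_bar_sidebands(50.0, s)
assert round(s, 3) == 0.03
assert (round(lo, 1), round(hi, 1)) == (47.0, 53.0)
```

A spectrum analyzer (or, as the paper proposes, an LLM fed the spectrum) would then look for peaks near 47 Hz and 53 Hz as evidence of broken rotor bars.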
G. Golob, M. Rot, G. Kosec (Jožef Štefan Institute, Ljubljana, Slovenia) Investigating Mixed-Precision Strategies for GPU-Accelerated RBF-FD Solvers 
Radial basis function-generated finite difference (RBF-FD) methods provide a flexible meshless discretization framework for solving partial differential equations on complex geometries. For explicit time integration schemes, the most compute-intensive steps of the solution procedure consist of matrix-vector operations, which are well suited for parallel execution. On modern GPU architectures, however, throughput is often limited by the choice of numerical precision. The numerical sensitivity of the RBF-FD method’s components is studied in order to identify opportunities for safely employing reduced precision. The impact of mixed-precision computation is evaluated through numerical experiments, with the viscous Burgers’ equation serving as a benchmark for assessing accuracy, convergence, and performance. The solver is implemented in SYCL with an emphasis on efficient GPU execution. We employ GPU profiling to analyze memory access patterns and hardware utilization, offering further insight into the trade-offs between numerical accuracy and performance on GPU-based systems.
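The precision sensitivity studied here can be illustrated without a GPU: naive accumulation in float32 drifts far more than in float64, which is why individual solver components must be profiled before being demoted to reduced precision. A pure-Python sketch (not the SYCL solver; single precision is emulated by round-tripping through a packed 32-bit float):

```python
import struct

def f32(x: float) -> float:
    """Round a Python float (IEEE double) to the nearest IEEE single."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Naively accumulate 0.1 one million times: the float64 sum stays close
# to 100000, while the emulated float32 sum drifts by hundreds -- the
# kind of component-level sensitivity a mixed-precision solver must map.
s64, s32 = 0.0, 0.0
for _ in range(1_000_000):
    s64 += 0.1
    s32 = f32(s32 + f32(0.1))

err64 = abs(s64 - 100_000.0)
err32 = abs(s32 - 100_000.0)
assert err64 < 1e-1
assert err32 > 10.0   # orders of magnitude worse than float64
```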
D. Borissova (Institute of Information and Communication Technologies at the Bulgarian Academy of Sciences, Sofia, Bulgaria), N. Garvanov (University of Library Studies and Information Technologies, Sofia, Bulgaria), A. Angelov (Institute of Information and Communication Technologies at the Bulgarian Academy of Sciences, Sofia, Bulgaria) Comparisons of Solutions for Optimizing in Solar Photovoltaic Systems 
With the continued development of renewable energy, solar power generation systems are widely used worldwide. In these systems, the inverter plays a vital role: it not only converts the direct current generated by solar panels into alternating current for use in homes or power grids, but also strongly influences the system's efficiency, stability, and safety. Given the need to improve the efficiency of solar photovoltaic systems, this study analyzes the factors that affect their productivity. A comparative analysis of inverter types and their application possibilities is made. In conclusion, some points to consider when choosing an inverter are indicated.
11:00 - 11:20 Break
11:20 - 13:00 Papers
D. Legan, I. Humar (University of Ljubljana, Ljubljana, Slovenia) Climatic and Geospatial Assessment of Solar PV Systems for Remote Mobile Base Stations Using PVsyst and QGIS 
The increasing energy demand of mobile telecommunication base stations, particularly in remote areas, makes reliable and sustainable power supply a key challenge for modern networks. Solar photovoltaic (PV) systems represent a promising solution, yet their performance strongly depends on local climatic conditions, temperature, and system configuration. This paper presents a comparative analysis of solar-powered supply solutions for remote mobile base stations across different climatic regions.
The study combines PV system simulations performed in PVsyst with geospatial analysis using QGIS, enabling a location-dependent evaluation of energy yield, temperature effects, and daily production profiles. Several representative regions were selected to reflect diverse environmental conditions, and realistic base station load scenarios were considered based on recent literature. Simulation results are processed and analyzed using Python, allowing a detailed comparison of annual energy production, performance deviations, and operational reliability.
The results highlight significant regional differences in PV performance, mainly driven by solar irradiance and ambient temperature, and identify the most suitable environments for standalone or hybrid PV-based power supply. The paper also discusses the potential role of energy storage and advanced forecasting methods as future extensions. Overall, the presented approach demonstrates a practical framework for assessing and optimizing renewable power solutions for sustainable telecommunication infrastructure.
L. Giacomossi, A. Haglund, Y. Beyene, E. Zainali, C. Namatovu, E. Målqvist, B. Cürüklü, I. Tomasic, H. Forsberg (Mälardalen University, Västerås, Sweden) Market-Based Replanning for Safety-Critical UAV Swarms in Search and Rescue Missions 
Reliable autonomous UAV swarms in Search and Rescue (SAR) missions require fault-tolerant coordination capable of sustaining operations despite agent degradation. This paper introduces the Intelligent Replanning Drone Swarm, a distributed coordination architecture designed for resource-constrained environments. The proposed framework employs a Reverse-Auction market mechanism where agents bid to service search sectors based on a distance-weighted cost function, coupled with a geometric consensus protocol for target verification. We evaluate the approach through physics-based simulations (N = 8 agents, 8×8 grid) subjected to stochastic fault injection. Results indicate that the swarm autonomously reallocates tasks from failed agents with a latency negligible relative to the total mission duration, maintaining a mission success rate of 93% under 25% workforce degradation. The proposed framework demonstrates a robust, empirically validated method for self-healing aerial robotic coordination.
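The Reverse-Auction mechanism described above can be sketched as a greedy assignment in which each sector goes to the lowest bidder; the distance-plus-load cost below is an invented simplification of the paper's distance-weighted bid, and the agent positions are made up:

```python
import math

def auction_round(agents, sectors):
    """Greedy reverse auction: each sector goes to the live agent with
    the lowest bid (distance to sector plus a simple load penalty)."""
    assignment = {}
    loads = {name: 0 for name in agents}
    for sector in sectors:
        bids = {
            name: math.dist(pos, sector) + loads[name]
            for name, pos in agents.items()
        }
        winner = min(bids, key=bids.get)
        assignment[sector] = winner
        loads[winner] += 1
    return assignment

agents = {"a1": (0.0, 0.0), "a2": (7.0, 7.0)}
sectors = [(1.0, 1.0), (6.0, 6.0)]
plan = auction_round(agents, sectors)
assert plan == {(1.0, 1.0): "a1", (6.0, 6.0): "a2"}

# replanning after a1 fails: its sectors are re-auctioned to survivors
del agents["a1"]
assert auction_round(agents, sectors)[(1.0, 1.0)] == "a2"
```

The self-healing property the paper evaluates is exactly this last step: when an agent drops out, the remaining agents simply bid again on its sectors.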
S. Annang (GCTU, Accra, Ghana), D. Gookyi (CSIR-INSTI, Accra, Ghana), S. Danso (GCTU, Accra, Ghana), M. Dziwornu, M. Araphat, Y. Barimah (CSIR-INSTI, Accra, Ghana) A Comprehensive Survey of Low-Cost IoT-Based Soil Monitoring Systems for Precision Agriculture 
Soil quality assessment is crucial for improving crop productivity and promoting sustainable agriculture, yet traditional laboratory testing is costly, time-consuming, and often inaccessible to smallholder farmers. This survey reviews advances in low-cost real-time soil monitoring systems for in-field deployment from 2020 to 2025. It synthesizes research on soil sensors for moisture, pH, and NPK nutrients; hardware such as microcontrollers and power systems; data acquisition and visualization platforms; and machine learning models for decision support. Emphasis is placed on Internet of Things (IoT) architectures, edge computing, and affordable sensors that reduce reliance on laboratories and continuous internet connectivity. Findings show that while sensing and analytics technologies are mature, low-cost constraints limit system integration, calibration stability, and adaptability, directly affecting reliability and adoption. Most systems monitor only a few soil parameters; fewer than 30% implement edge intelligence, and long-term field calibration is rare. This review identifies gaps and provides insights for comprehensive, low-cost, farmer-centric soil monitoring solutions.
M. Hasan, A. Khan, M. Saari, V. Bankhele, P. Abrahamsson (Tampere University, Pori, Finland) Towards AI Evaluation in Domain-Specific RAG Systems: The AgriHubi Case Study 
Large language models show promise for knowledge-intensive domains, yet their use in agriculture is constrained by weak grounding, English-centric training data, and limited real-world evaluation. These issues are amplified for low-resource languages, where high-quality domain documentation exists but remains difficult to access through general-purpose models. This paper presents AgriHubi, a domain-adapted retrieval-augmented generation (RAG) system for Finnish-language agricultural decision support. AgriHubi integrates Finnish agricultural documents with open PORO family models and combines explicit source grounding with user feedback to support iterative refinement. Developed over eight iterations and evaluated through two user studies, the system shows clear gains in answer completeness, linguistic accuracy, and perceived reliability. The results also reveal practical trade-offs between response quality and latency when deploying larger models. This study provides empirical guidance for designing and evaluating domain-specific RAG systems in low-resource language settings.
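The grounding step in a RAG pipeline like the one described reduces to "retrieve, then attach the source to the prompt". A toy sketch in which a keyword-overlap scorer stands in for a real retriever (AgriHubi's actual retrieval method is not specified here, and the documents and query below are invented):

```python
def score(query: str, doc: str) -> float:
    """Keyword-overlap (Jaccard) score; a stand-in for an embedding
    retriever."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query, docs, k=1):
    """Return the k best-scoring documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "barley sowing depth and soil temperature guidance",
    "dairy cattle feeding schedule in winter",
]
top = retrieve("recommended sowing depth for barley", docs)
assert "barley" in top[0]

# grounded prompt: the retrieved passage is attached as explicit context,
# so the model's answer can cite its source
prompt = f"Context: {top[0]}\n\nQuestion: recommended sowing depth for barley"
assert prompt.startswith("Context: barley")
```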
J. Vehovar, M. Rot (Institut "Jožef Stefan", Jožef Stefan International Postgraduate School, Ljubljana, Slovenia), M. Depolli, G. Kosec (Institut "Jožef Stefan", Ljubljana, Slovenia) Load Balanced Parallel Node Generation for Meshless Numerical Methods 
Meshless methods are used to solve partial differential equations by approximating differential operators at a node as a weighted sum of values at its neighbours. One of the algorithms for generating nodes suitable for meshless numerical analysis is an n-dimensional Poisson disc sampling based method. It can handle complex geometries and supports variable node density, a crucial feature for adaptive analysis. We modify this method for parallel execution using coupled spatial indexing and work distribution hypertrees. The latter is prebuilt according to the node density function, ensuring that each leaf represents a balanced work unit. Threads advance separate fronts and claim work hypertree leaves as needed while avoiding leaves neighbouring those claimed by other threads. Node placement constraints and the partially prebuilt spatial hypertree are combined to eliminate the need to lock the tree while it is being modified. Thread collision handling is managed by the work hypertree at the leaf level, drastically reducing the number of required mutex acquisitions for point insertion collision checks. We explore the behaviour of the proposed algorithm, compare its performance with existing parallelisation attempts, and consider the requirements for adapting the developed algorithm to distributed systems.
13:00 - 15:00 Lunch break
15:00 - 16:40 Papers
N. Suvitie, M. Saari, P. Abrahamsson (Tampere University, Pori, Finland) From PDF to Dataset: Semi-Automated Extraction of Fine-Tuning Data 
Preparing fine-tuning datasets for large language models (LLMs) commonly involves substantial manual effort, particularly in extracting, structuring, and validating data from unstructured sources. This study proposes a semi-automated, human-in-the-loop approach for generating fine-tuning question–answer (QA) pairs from PDF documents. The research investigates how unstructured textual content can be systematically transformed into validated QA data suitable for fine-tuning, while mitigating the risks associated with hallucinated or low-quality model outputs. The proposed system consists of a web-based architecture combining a React frontend with a Flask backend interfacing with the OpenAI API. Users provide a PDF document and a target page range, after which the system extracts text and generates candidate QA pairs. These candidates are presented for manual inspection, filtering, and refinement, prior to export in a structured JSON format compatible with fine-tuning pipelines. The results indicate that the proposed approach reduces the effort required for manual dataset construction while preserving data quality through mandatory human validation. The study highlights the effectiveness of hybrid automation workflows in accelerating fine-tuning dataset preparation without compromising reliability, and contributes design insights for human-centered tools supporting LLM customization.
T. Brajko, S. Požgaj, A. Kurdija, K. Vladimir, G. Delač, M. Šilić (University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia) Interactive Visualizer for Neural Algorithmic Reasoners 
Recent advances in machine learning have enabled neural networks to approximate the behavior of classical algorithms and solve structured computational tasks. Neural Algorithmic Reasoning (NAR) formalizes this setting by training models to execute algorithms step by step, as captured by the CLRS-30 benchmark. Despite solid quantitative results, the internal reasoning processes of such models remain difficult to inspect. This paper presents an interactive visualization system for Neural Algorithmic Reasoners on the CLRS-30 benchmark. The system supports all CLRS-30 algorithms and provides a unified interface for exploring complete execution traces, including intermediate algorithmic states and latent space representations produced by the neural model. Users can control the execution timeline, compare predicted and ground-truth trajectories, and inspect algorithm-specific signals at each step. We demonstrate the usefulness of the visualizer on several representative examples, showing how it enables qualitative analysis of learned reasoning behavior and helps identify systematic model errors not visible from final accuracy metrics alone.
N. Hlupić, P. Kapec (University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia), A. Lučić (Rain Technologies d.o.o., Zagreb, Croatia), D. Čakija (University of Zagreb, Faculty of Transport and Traffic Sciences, Zagreb, Croatia) Partial Parallelization of MinSum Algorithm for System-Optimal Traffic Assignment 
Traffic flow optimization has been an intensively studied problem in past decades, mainly with two principal goals: user equilibrium and system optimality. Over time, numerous algorithms have been developed based on both mathematical optimization and heuristic methods, but regardless of the operating principle and strategy, they all have to repeatedly calculate the total travel time (TTT) in the city, one of the most time-consuming and influential operations in terms of total execution time. On the other hand, contemporary technology provides affordable multicore processors and program parallelization, particularly suitable for calculating the TTT of the entire city, because the total reduces to a summation of driving times in different parts of the city, and all these summations are mutually independent. This article therefore analyzes the time savings achievable by parallelizing the calculation of the TTT in one of the fastest algorithms for city-level system-optimal traffic assignment. It is shown that with appropriate knowledge and minimal program adjustments, a severalfold acceleration is achievable even on general-purpose computers, making advanced large-scale traffic management an attainable aim.
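The independence argument above can be sketched directly: per-zone travel-time sums share no state, so they map cleanly onto parallel workers. An illustrative sketch using the standard BPR volume-delay function (an assumption; the paper's travel-time model is not specified), with a thread pool standing in for real multicore execution (in CPython, processes or native code would be needed for an actual speedup):

```python
from concurrent.futures import ThreadPoolExecutor

def bpr_time(t0: float, volume: float, capacity: float) -> float:
    """BPR volume-delay function: congested travel time on one link."""
    return t0 * (1.0 + 0.15 * (volume / capacity) ** 4)

def zone_ttt(links):
    """Total travel time contributed by one city zone: vehicle-weighted
    link times. Zones are mutually independent work units."""
    return sum(v * bpr_time(t0, v, c) for t0, v, c in links)

# each zone: list of (free-flow time, volume, capacity) per link
zones = [
    [(1.0, 100.0, 200.0), (2.0, 50.0, 100.0)],
    [(1.5, 300.0, 300.0), (0.5, 10.0, 100.0)],
]

# parallel map over zones, then a cheap final reduction
with ThreadPoolExecutor() as pool:
    ttt = sum(pool.map(zone_ttt, zones))

# the parallel total matches the serial one exactly
assert abs(ttt - sum(zone_ttt(z) for z in zones)) < 1e-9
```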
A. Damjanovski, V. Zdraveski (Faculty of Computer Science and Engineering, Skopje, Macedonia) Safety-Prioritized Navigation: Route Recommendation Using Parallel K-means Clustering
Recent advancements in navigation systems emphasize the need to incorporate safety considerations alongside traditional metrics like travel time. This study introduces an approach to safety-prioritized route recommendation by implementing parallel nested K-means clustering, enhanced through GPU-based parallel processing on CUDA. Utilizing geospatial datasets from New York City, which contain motor vehicle collisions, crime complaints, and shooting incidents, the model categorizes data points into accident-related and criminal groups, assigning each a weighted safety score based on severity. Optimal clustering configurations are determined using the Root Mean Square Error (RMSE). Performance evaluations demonstrate that GPU-parallelized clustering substantially outperforms traditional CPU-based methods, achieving significant speed improvements, particularly with larger datasets. Although data fetching from external APIs constitutes the major runtime bottleneck, the clustering phase itself is highly optimized, processing datasets of up to 10 million records within seconds. Safety scores are computed at regular waypoints along potential routes using a K-Nearest Neighbors regression model, enabling detailed safety assessments of individual paths. Comparative analysis against standard fastest-route recommendations reveals that the safety-prioritized routes identified by the proposed model effectively avoid high-risk areas. This approach underscores the importance of integrating comprehensive safety considerations into modern navigation solutions, with significant applications for travelers and commercial fleet management.
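The waypoint scoring step described, a k-NN regression over severity-weighted incident data, fits in a few lines. A sketch with invented severity weights and k (the paper's weighting scheme and distance metric may differ):

```python
import math

def knn_safety_score(point, incidents, k=3):
    """Average the severity weights of the k nearest incidents:
    a plain k-NN regressor over (location, weight) pairs."""
    nearest = sorted(incidents, key=lambda i: math.dist(point, i[0]))[:k]
    return sum(w for _, w in nearest) / k

# (location, severity weight): higher weight = more dangerous
incidents = [
    ((0.0, 0.0), 0.9), ((0.1, 0.0), 0.8), ((0.0, 0.1), 0.7),
    ((5.0, 5.0), 0.1), ((5.1, 5.0), 0.2), ((5.0, 5.1), 0.1),
]

# a waypoint near the incident cluster scores worse than a distant one
risky = knn_safety_score((0.05, 0.05), incidents)
safe = knn_safety_score((5.05, 5.05), incidents)
assert risky > safe
```

A route planner would evaluate this score at regular waypoints along each candidate route and penalize routes whose aggregate score is high.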
R. Domović, M. Rehak, B. Cafuta (Zagreb University of Applied Sciences, Zagreb, Croatia) Implementation Practices of HTTP Security Headers on European News Portals
News portals are an important part of the digital public space because they have high traffic, public visibility, substantial advertising space, and they rely on the trust of their users. This is why implementing basic web security mechanisms on such portals is important. This paper presents a comparative analysis of the implementation of essential HTTP security headers on the most visited news portals across Europe. The analysis is based on a dataset of 225 news portals from 45 European countries, categorized into EU member states and non-EU countries. Data were collected in March 2025 using a custom Python script developed for automated analysis of HTTP response headers, focusing on HTTP Strict-Transport-Security (HSTS), Content-Security-Policy (CSP), Referrer-Policy, and Permissions-Policy. The results show marked differences in the adoption of security headers between the two groups. Portals from EU member states more often apply basic HTTP security headers compared to portals outside the EU, where the difference is especially pronounced when HSTS headers are applied. The findings indicate gaps in the practical implementation of web security and suggest the need for improved baseline security configurations for news portals.
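The core check such a script performs reduces to inspecting response headers case-insensitively, since HTTP header names are case-insensitive. A minimal sketch, not the authors' tool (in a real run the header dict would come from an HTTP client such as `requests`):

```python
def audit_security_headers(headers: dict) -> dict:
    """Report presence of the four headers analyzed in the paper;
    lookup is case-insensitive, as HTTP header names are."""
    wanted = [
        "Strict-Transport-Security",
        "Content-Security-Policy",
        "Referrer-Policy",
        "Permissions-Policy",
    ]
    present = {name.lower() for name in headers}
    return {h: h.lower() in present for h in wanted}

# e.g. headers = requests.get("https://example.com").headers  (real run)
sample = {
    "strict-transport-security": "max-age=31536000",
    "content-security-policy": "default-src 'self'",
}
report = audit_security_headers(sample)
assert report["Strict-Transport-Security"] is True
assert report["Permissions-Policy"] is False
```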
16:40 - 17:00 Break
17:00 - 19:00 Papers
Y. Watashiba (The University of Osaka, Suita, Japan), S. Matsui, S. Date (The University of Osaka, Ibaraki, Japan) Cost Conscious Algorithm for Compute Node Shutdown in Demand-Response-Oriented Resource Management System 
Demand response programs maintain grid balance during supply constraints by asking consumers to reduce electricity use. Since HPC centers consume large amounts of power, their participation is desirable. However, common power capping techniques in the operation of computing resources decrease service quality, which makes participation difficult. To address this gap, we have been developing a demand-response-oriented resource management system for HPC centers. When submitting a job, users declare whether they can tolerate performance impacts caused by power reduction. The job scheduler then prioritizes non-cooperative jobs and confines reduction effects to cooperative jobs as much as possible. To encourage cooperative jobs, the system pays rewards to incentivize operational cooperation through cooperative job submissions. Such resource management must therefore optimize both power reduction and incentive cost. This paper focuses on computer cluster operations that enforce an upper power bound by shutting down compute nodes, a widely used method in practice. We propose the Cost Conscious Algorithm (CCA), which determines which compute nodes to shut down to keep power consumption under the upper limit, considering both job impacts and incentive costs. Simulation results using DRJSS show that the proposed CCA consistently reduces the total payment from the HPC center to users compared with the baseline algorithm, while also decreasing the number of non-cooperative jobs whose waiting time increases.
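A cost-conscious shutdown choice of this kind can be illustrated with a greedy heuristic: shut down the nodes that free the most power per unit of incentive cost until the cap is met. This is a stand-in sketch, not the paper's CCA, and the node figures are invented:

```python
def choose_shutdown(nodes, power_cap):
    """Greedy stand-in for a cost-conscious policy: shut down the
    cheapest cost-per-watt nodes until total draw fits under the cap.
    nodes: list of (name, power draw in W, incentive cost of shutdown)."""
    total = sum(power for _, power, _ in nodes)
    # cheapest incentive cost per watt saved goes first
    order = sorted(nodes, key=lambda n: n[2] / n[1])
    off = []
    for name, power, cost in order:
        if total <= power_cap:
            break
        off.append(name)
        total -= power
    return off, total

nodes = [("n1", 400, 10.0), ("n2", 400, 2.0), ("n3", 400, 6.0)]
off, remaining = choose_shutdown(nodes, power_cap=800)
assert off == ["n2"] and remaining == 800
```

Shutting down `n2` alone meets the 800 W cap at the lowest incentive cost; an optimal algorithm would additionally weigh per-job performance impacts, as the paper describes.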
A. Periola (Cape Peninsula University of Technology, Cape Town, South Africa), A. Alonge (Tshwane University of Technology, Pretoria, South Africa), K. Ogudo (University of Johannesburg, Johannesburg, South Africa) Distributed Underwater Data Center Architecture for Cost and Uptime Optimization
Data centers host a significant number of servers that play a crucial role in executing algorithms associated with large language model (LLM) based services such as agentic artificial intelligence (agentic AI). LLM services are executed aboard terrestrial data center (TDC) servers. However, it is challenging to establish TDCs in coastal regions due to high land reclamation costs. This challenge can be addressed by using underwater data centers (UDCs). It is also important to deploy UDCs in a manner that ensures a long operational duration, i.e., uptime. The presented research proposes the use of UDCs to reduce land reclamation costs (LRCs). It also proposes the logical clustering of multiple UDCs with the goal of enhancing functional probability and operational resilience, which improves the uptime. The performance evaluation focuses on the LRCs and uptime. Analysis shows that the introduction of UDCs reduces the LRC by an average of 8.5–36%. The use of the clustered UDC architecture instead of the existing approach of single, standalone UDCs for agentic AI service execution also enhances the uptime by an average of 59.4–62.7%.
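The uptime gain from logical clustering follows from elementary redundancy arithmetic: with n independently failing units each available with probability p, at least one is available with probability 1 - (1 - p)^n. A small sketch with illustrative numbers (the paper's availability model and figures may differ):

```python
def cluster_uptime(p_single: float, n: int) -> float:
    """Probability that at least one of n independent units is up."""
    return 1.0 - (1.0 - p_single) ** n

# a single unit that is up 60% of the time, versus a cluster of three
single = cluster_uptime(0.6, 1)
clustered = cluster_uptime(0.6, 3)
assert round(single, 10) == 0.6
assert round(clustered, 3) == 0.936   # 1 - 0.4**3
```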
N. Hristovski, M. Gusev, D. Mileski (University Sts Cyril and Methodius, Skopje, Macedonia) A Case Study of Scalability for PaaS, CaaS, and FaaS Applications 
In this paper, we address the scalability characteristics of cloud services for CPU-intensive applications. We focus on three paradigms offered by Amazon Web Services (AWS): the Function-as-a-Service model via AWS Lambda, the Container-as-a-Service model via Amazon Elastic Container Service, and the Platform-as-a-Service model via AWS Elastic Beanstalk. Convolution, as one of the most widely used algorithms, is a computationally intensive operation used in experiments to evaluate various deployments in terms of scaling latency, processing speedup, and overall availability in CPU-intensive workloads. The goal is to provide insights into their relative performance and suitability for computationally demanding applications. The experiments showed that the AWS Elastic Beanstalk service offers no advantage over the other two services in terms of reliability or fast scaling for CPU-intensive applications. The AWS ECS service demonstrated the most efficient resource utilization and achieved the highest request rate per minute per instance under manual scaling. The AWS Lambda service achieved the highest speedup with automated scaling and full-time availability.
M. Kostoska, N. Hristovski (Ss Cyril and Methodius University in Skopje, Skopje, Macedonia) Towards Sustainable Kubernetes: Leveraging KEDA and Knative for Zero-Instance Scaling 
As cloud-native systems continue to grow, energy efficiency and cost reduction have become critical concerns within modern computing environments. This work explores the role of autoscaling mechanisms in contributing to sustainable and resource-efficient cloud usage, with a particular focus on scale-to-zero behavior. Two Kubernetes-based solutions, KEDA and Knative, are examined and deployed in order to evaluate their capabilities, configuration models, and performance characteristics. Through practical experimentation and comparative analysis, the results demonstrate that both tools can successfully scale workloads to zero when idle, significantly reducing unnecessary resource consumption. KEDA offers broad event-driven scaling and straightforward integration with standard Kubernetes workloads, whereas Knative provides a serverless-oriented model optimized for HTTP traffic through service mesh support. The findings confirm that intelligent autoscaling can support sustainable IT goals and enable cost-efficient cloud operations, providing guidance for selecting appropriate tools based on architectural needs and system requirements.
P. Iseni, J. Ajdari, X. Zenuni, M. Hamiti (South East European University, Tetovo, Macedonia) Optimizing MLaaS Platform Selection: A Performance, Cost, and Security Analysis of AWS, Google Cloud, and Microsoft Azure 
This paper provides a comparative study on scalability, usability, and other aspects of Machine Learning as a Service (MLaaS) offerings from major cloud providers: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Most existing studies examine MLaaS performance, cost, or security in isolation rather than adopting an integrated perspective. We conduct a systematic literature review on MLaaS and investigate the primary trade-offs among the three providers, as well as gaps in their comparative evaluation. Our preliminary findings indicate that AWS outperforms the other platforms in terms of ecosystem maturity and flexibility, GCP demonstrates strong performance for AI-optimized workloads, and Azure excels in enterprise integration and compliance services. Furthermore, our study finds that implementing various cost optimization strategies can reduce costs by approximately 15%–67%, but additional effort is required to select the right pricing models and avoid potential hidden fees. To support platform selection based on workload characteristics, organizational priorities, and tolerance for operational and security risks, we combined literature-driven insights with controlled experiments and evaluated training speed, inference latency, and cost efficiency. The findings of this study also provide a useful reference for researchers conducting future benchmarking studies on MLaaS platforms.
Basic information:
Chairs:
Vlado Sruk (Croatia), Dejan Škvorc (Croatia)
Program committee:
Goran Delač (Croatia), Željko Hocenski (Croatia), Leonardo Jelenković (Croatia), Hrvoje Mlinarić (Croatia), Adrian Satja Kurdija (Croatia), Vlado Sruk (Croatia), Marin Šilić (Croatia), Dejan Škvorc (Croatia), Klemo Vladimir (Croatia)
Registration/Fees:
REGISTRATION / FEES

Prices in EUR

                                            Until 15 May 2026    From 16 May 2026
IEEE members                                       297                  324
MIPRO members                                      297                  324
Students (undergraduate and graduate)
and primary and secondary school teachers          165                  180
Others                                             330                  360
The student discount does not apply to doctoral students.
NOTICE TO AUTHORS: A condition for publication of a paper is payment of at least one registration fee per paper. For authors of 2 or more papers, the total registration fee is reduced by 10.
Contact:
Vlado Sruk
Faculty of Electrical Engineering and Computing
Unska 3
10000 Zagreb, Croatia
Phone: +385 1 612 99 45
Fax: +385 1 612 96 53
E-mail: vlado.sruk@fer.hr

Dejan Škvorc
Faculty of Electrical Engineering and Computing
Unska 3
10000 Zagreb, Croatia
Phone: +385 1 612 99 43
Fax: +385 1 612 96 53
E-mail: dejan.skvorc@fer.hr
The best papers will receive awards.
Accepted papers will be published in the conference proceedings with an ISSN number. Papers presented at the conference in English will be submitted for inclusion in the IEEE Xplore digital library.

Venue:

Opatija is the leading seaside resort on the eastern side of the Adriatic and one of the best known on the Mediterranean. For more than 180 years, this town of aristocratic architecture and style has attracted world-famous artists, politicians, royalty, scientists, and athletes, as well as business people, bankers, managers, and everyone else drawn by its many attractions.
Opatija offers its guests numerous comfortable hotels, excellent restaurants, entertainment venues, art festivals, superb concerts of classical and popular music, well-kept beaches, and numerous swimming pools: everything needed for a pleasant stay for guests of all interests.
More recently, Opatija has become one of the best-known congress cities on the Mediterranean, particularly recognized for the international MIPRO ICT conventions, held there since 1979, which regularly attract more than a thousand participants from some forty countries. These conventions promote Opatija as an indispensable technological, business, educational, and scientific center of Southeast Europe and the European Union in general.
Further information is available at www.opatija.hr and www.visitopatija.com.