<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns="http://www.w3.org/2005/Atom">
<title>Fakulta aplikované informatiky</title>
<link href="http://hdl.handle.net/10563/1001724" rel="alternate"/>
<subtitle/>
<id>http://hdl.handle.net/10563/1001724</id>
<updated>2026-04-06T08:21:51Z</updated>
<dc:date>2026-04-06T08:21:51Z</dc:date>
<entry>
<title>SD-LSTM: A novel semi-decentralized LSTM architecture for scalable and accurate stock price prediction</title>
<link href="http://hdl.handle.net/10563/1012728" rel="alternate"/>
<author>
<name>Li, Peng</name>
</author>
<author>
<name>Šenkeřík, Roman</name>
</author>
<author>
<name>Komínková Oplatková, Zuzana</name>
</author>
<id>http://hdl.handle.net/10563/1012728</id>
<updated>2026-03-26T13:13:50Z</updated>
<published>2026-01-01T00:00:00Z</published>
<summary type="text">SD-LSTM: A novel semi-decentralized LSTM architecture for scalable and accurate stock price prediction
Li, Peng; Šenkeřík, Roman; Komínková Oplatková, Zuzana
This study introduces a novel Semi-Decentralized Long Short-Term Memory (SD-LSTM) architecture and compares its performance against a traditional LSTM model for stock price prediction, examining both accuracy and training time. All experiments employ canonical settings. Results indicate that SD-LSTM consistently achieves better prediction accuracy, evidenced by significantly lower mean squared error, across stock data from five major U.S. companies (Apple, NVIDIA, Amazon, Alphabet, Microsoft). Moreover, SD-LSTM accomplishes these improvements with fewer parameters. In terms of training speed, SD-LSTM is substantially faster than the traditional LSTM when handling larger datasets and more complex configurations, highlighting its efficiency in parallel processing. Overall, these findings underscore the potential of the new SD-LSTM architecture for large-scale applications and its viability for integration into both established and emerging hybrid approaches that demand advanced predictive accuracy and computational efficiency.
</summary>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating NLP tools for AI in software requirements analysis</title>
<link href="http://hdl.handle.net/10563/1012726" rel="alternate"/>
<author>
<name>Okechukwu, Cornelius Chimuanya</name>
</author>
<author>
<name>Šilhavý, Radek</name>
</author>
<author>
<name>Šilhavý, Petr</name>
</author>
<id>http://hdl.handle.net/10563/1012726</id>
<updated>2026-02-17T12:10:05Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">Evaluating NLP tools for AI in software requirements analysis
Okechukwu, Cornelius Chimuanya; Šilhavý, Radek; Šilhavý, Petr
Software requirements analysis is increasingly automated by applying natural language processing (NLP) tools, enhancing efficiency and precision. This research employs the Mendeley FR_NFR dataset to evaluate the classification of functional requirements (FR) and non-functional requirements (NFR) utilising three NLP tools: NLTK, OpenAI, and spaCy. The evaluation uses performance indicators such as F1-score, recall, accuracy, precision, and confusion matrices. OpenAI is a good option for high-stakes applications because of its 94% F1-score and exceptional accuracy, even with the associated API costs. With 83% accuracy and 0.1 s per query, spaCy is well suited for real-time applications because it balances speed and efficiency. With its 68% accuracy, NLTK's rule-based methodology remains a viable choice for prototyping or for controlled settings where transparency is crucial. With an average accuracy of 92%, the results show that OpenAI's transformer-based model outperforms NLTK and spaCy, even though spaCy has an advantage in entity recognition. This study provides practitioners with critical insights by elucidating the trade-offs between accuracy, interpretability, and computational efficiency.
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A comparative evaluation of validation techniques in software effort estimation using eSOMCOCOMO</title>
<link href="http://hdl.handle.net/10563/1012727" rel="alternate"/>
<author>
<name>Bajusová, Darina</name>
</author>
<author>
<name>Šilhavý, Radek</name>
</author>
<author>
<name>Šilhavý, Petr</name>
</author>
<id>http://hdl.handle.net/10563/1012727</id>
<updated>2026-02-17T12:10:05Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">A comparative evaluation of validation techniques in software effort estimation using eSOMCOCOMO
Bajusová, Darina; Šilhavý, Radek; Šilhavý, Petr
This study investigates the impact of different validation techniques on the performance evaluation of software effort estimation models. Specifically, it compares k-fold cross-validation, leave-one-out cross-validation (LOOCV), and hold-out validation using the eSOMCOCOMO approach, which enhances COCOMO model predictions through the Self-Organizing Migrating Algorithm (SOMA). The evaluation was conducted on three benchmark datasets (NASA18, Kemerer, and Miyazaki94) and assessed using standard evaluation metrics (MMRE, PRED(25), MMER, MAE, MSE, RMSE, and R2). Statistical hypothesis testing revealed significant differences among most validation techniques, except in the comparison conducted on the NASA18 dataset. LOOCV demonstrated superior stability across multiple runs, whereas hold-out validation showed high variance.
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The pilot study of B-mode image analysis of kidney diseases</title>
<link href="http://hdl.handle.net/10563/1012729" rel="alternate"/>
<author>
<name>Blahuta, Jiří</name>
</author>
<author>
<name>Pavlík, Lukáš</name>
</author>
<author>
<name>Soukup, Tomáš</name>
</author>
<author>
<name>Kozel, Jiří</name>
</author>
<id>http://hdl.handle.net/10563/1012729</id>
<updated>2026-02-17T12:10:05Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">The pilot study of B-mode image analysis of kidney diseases
Blahuta, Jiří; Pavlík, Lukáš; Soukup, Tomáš; Kozel, Jiří
Ultrasound diagnostics is a key tool in the investigation of renal diseases in pediatric patients due to its non-invasiveness and accessibility. Although it offers many advantages, its accuracy in detecting functional pathological changes remains an open question, which this article investigates. This pilot study included 17 children aged 1 to 18 years who underwent renal ultrasound examination followed by a scintigraphy examination at a nuclear medicine clinic. Renal ultrasound images were analyzed using the digital B-MODE Assist system, which calculates renal echogenicity. The B-MODE Assist software was developed more than 10 years ago and is used to measure region-of-interest echogenicity in B-mode medical images. Echogenicity results were then compared with relative renal function determined using nuclear medicine methods. The B-MODE Assist system demonstrated a sensitivity of 57–100 % and a specificity of 70–100 % in predicting renal pathological findings from ultrasound echogenicity compared with nuclear medicine methods.
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
</feed>
