Digitala Vetenskapliga Arkivet

Search results 1 - 50 of 4966
  • 1.
    Abbas, Gulfam
    et al.
    Blekinge Institute of Technology, School of Computing.
    Asif, Naveed
    Blekinge Institute of Technology, School of Computing.
    Performance Tradeoffs in Software Transactional Memory (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Transactional memory (TM), a new programming paradigm, is one of the latest approaches to writing programs for next-generation multicore and multiprocessor systems. TM is an alternative to lock-based programming. It is a promising solution to a hefty and mounting problem that programmers face in developing programs for Chip Multi-Processor (CMP) architectures, as it simplifies synchronization to shared data structures in a way that is scalable and composable. Software Transactional Memory (STM), a purely software approach to TM, can be defined as a non-blocking synchronization mechanism in which sequential objects are automatically converted into concurrent objects. In this thesis, we present a performance comparison of four different STM implementations: RSTM of V. J. Marathe et al., TL2 of D. Dice et al., TinySTM of P. Felber et al., and SwissTM of A. Dragojevic et al. The comparison gives a deeper understanding of the tradeoffs involved and helps assess which design choices and configuration parameters may lead to better and more efficient STMs. In particular, the suitability of each STM is analyzed against the others. A literature study is carried out to select STM implementations for experimentation, and an experiment is performed to measure the performance tradeoffs between them. The empirical evaluations done as part of this thesis conclude that SwissTM has significantly higher throughput than the state-of-the-art STM implementations RSTM, TL2, and TinySTM, consistently outperforming them on execution time and aborts-per-commit measurements on the STAMP benchmarks. The transaction retry rate measurements, however, show that TL2 performs better than RSTM, TinySTM, and SwissTM.
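    The aborts-per-commit accounting measured in the thesis can be illustrated with a toy optimistic-concurrency sketch (illustrative only: real STMs such as TL2 or SwissTM use version clocks, read/write sets, and contention managers; `TVar`, `atomic`, and the stats dictionary are invented names):

```python
import threading

class TVar:
    """A transactional variable with a version counter (toy model)."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

def atomic(update, tvar, stats):
    """Optimistically read, compute, then validate-and-commit.
    On validation failure the transaction aborts and retries."""
    while True:
        seen_version = tvar.version
        new_value = update(tvar.value)        # speculative work
        with tvar.lock:
            if tvar.version == seen_version:  # still consistent?
                tvar.value = new_value
                tvar.version += 1
                stats["commits"] += 1
                return
        stats["aborts"] += 1                  # conflict: abort and retry
```

    Running several threads through `atomic` and dividing `stats["aborts"]` by `stats["commits"]` yields an aborts-per-commit figure analogous to the metric reported for the STAMP benchmarks.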

  • 2.
    Abbas, Mhd Yazan
    et al.
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science (from 2013). Karlstad University, Faculty of Economic Sciences, Communication and IT, Department of Computer Science.
    Ferzali, Ali
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science (from 2013). Karlstad University, Faculty of Economic Sciences, Communication and IT, Department of Computer Science.
    A mobile Application development in the Android Studio platform for leave applications (2022). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Nowadays, there is a mobile application for almost every need, for instance training, games, and news. Almost all companies also have their own mobile applications that facilitate daily communication between their employees and supervisors. This thesis describes the development of an Android application for the company Askås through which employees can, for example, submit a Flex application or a sick-leave application, and view and edit their registered personal information. The final result of the project is close to what Askås asked for: an Android application that facilitates their employees' daily needs. The development process, the results, and the problems encountered are presented in more detail in the thesis.

  • 3.
    Abbas, Nadeem
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Designing Self-Adaptive Software Systems with Reuse (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Modern software systems are increasingly connected, pervasive, and dynamic; as such, they are subject to more runtime variations than legacy systems. Runtime variations affect system properties, such as performance and availability. The variations are difficult to anticipate and thus mitigate in the system design.

    Self-adaptive software systems were proposed as a solution to monitor and adapt systems in response to runtime variations. Research has established a vast body of knowledge on engineering self-adaptive systems. However, there is a lack of systematic process support that leverages such engineering knowledge and provides for systematic reuse for self-adaptive systems development. 

    This thesis proposes the Autonomic Software Product Lines (ASPL), which is a strategy for developing self-adaptive software systems with systematic reuse. The strategy exploits the separation of a managed and a managing subsystem and describes three steps that transform and integrate a domain-independent managing system platform into a domain-specific software product line for self-adaptive software systems.

    Applying the ASPL strategy is however not straightforward as it involves challenges related to variability and uncertainty. We analyzed variability and uncertainty to understand their causes and effects. Based on the results, we developed the Autonomic Software Product Lines engineering (ASPLe) methodology, which provides process support for the ASPL strategy. The ASPLe has three processes: 1) ASPL Domain Engineering, 2) Specialization, and 3) Integration. Each process maps to one of the steps in the ASPL strategy and defines roles, work-products, activities, and workflows for requirements, design, implementation, and testing. The focus of this thesis is on requirements and design.

    We validate the ASPLe through demonstration and evaluation. We developed three demonstrator product lines using the ASPLe. We also conducted an extensive case study to evaluate key design activities in the ASPLe with experiments, questionnaires, and interviews. The results show a statistically significant increase in quality and reuse levels for self-adaptive software systems designed using the ASPLe compared to current engineering practices.

  • 4.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Andersson, Jesper
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Architectural reasoning for dynamic software product lines (2013). In: Proceedings of the 17th International Software Product Line Conference co-located workshops, ACM Press, 2013, p. 117-124. Conference paper (Refereed)
    Abstract [en]

    Software quality is critical in today's software systems. A challenge is the trade-off situation architects face in the design process. Designers often have two or more alternatives, which must be compared and put into context before a decision is made. The challenge becomes even more complex for dynamic software product lines, where domain designers have to take runtime variations into consideration as well. To address the problem we propose extensions to an architectural reasoning framework with constructs/artifacts to define and model a domain's scope and dynamic variability. The extended reasoning framework encapsulates knowledge to understand and reason about domain quality behavior and self-adaptation as a primary variability mechanism. The framework is demonstrated for a self-configuration property, self-upgradability, on an educational product line.

  • 5.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Andersson, Jesper
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    ASPLe: a methodology to develop self-adaptive software systems with reuse (2017). Report (Other academic)
    Abstract [en]

    Advances in computing technologies are pushing software systems and their operating environments to become more dynamic and complex. The growing complexity of software systems, coupled with uncertainties induced by runtime variations, leads to challenges in software analysis and design. Self-Adaptive Software Systems (SASS) have been proposed as a solution to address design-time complexity and uncertainty by adapting software systems at runtime. A vast body of knowledge on engineering self-adaptive software systems has been established. However, to the best of our knowledge, little or no work has considered systematic reuse of this knowledge. To that end, this study contributes an Autonomic Software Product Lines engineering (ASPLe) methodology. The ASPLe is based on a multi-product-lines strategy which leverages systematic reuse through separation of application and adaptation logic. It provides developers with repeatable process support to design and develop self-adaptive software systems with reuse across several application domains. The methodology is composed of three core processes, and each process is organized for requirements, design, implementation, and testing activities. To exemplify and demonstrate the use of the ASPLe methodology, three application domains are used as running examples throughout the report.

  • 6.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Andersson, Jesper
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Iftikhar, Muhammad Usman
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Weyns, Danny
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Rigorous architectural reasoning for self-adaptive software systems (2016). In: Proceedings: First Workshop on Qualitative Reasoning about Software Architectures, QRASA 2016 / [ed] Lisa O'Conner, IEEE, 2016, p. 11-18. Conference paper (Refereed)
    Abstract [en]

    Designing a software architecture requires architectural reasoning, i.e., activities that translate requirements to an architecture solution. Architectural reasoning is particularly challenging in the design of product lines of self-adaptive systems, which involve variability both at development time and runtime. In previous work we developed an extended Architectural Reasoning Framework (eARF) to address this challenge. However, evaluation of the eARF showed that the framework lacked support for rigorous reasoning, ensuring that the design complies with the requirements. In this paper, we introduce an analytical framework that enhances eARF with such support. The framework defines a set of artifacts and a series of activities. Artifacts include templates to specify domain quality attribute scenarios, concrete models, and properties. The activities support architects with transforming requirement scenarios to architecture models that comply with required properties. Our focus in this paper is on architectural reasoning support for a single product instance. We illustrate the benefits of the approach by applying it to an example client-server system, and outline challenges for future work. © 2016 IEEE.

  • 7.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Andersson, Jesper
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Towards Autonomic Software Product Lines (ASPL) - A Technical Report (2011). Report (Other academic)
    Abstract [en]

    This report describes work in progress to develop Autonomic Software Product Lines (ASPL). The ASPL is a dynamic software product line approach with a novel variability handling mechanism that enables traditional software product lines to adapt themselves at runtime in response to changes in their context, requirements, and business goals. The ASPL variability mechanism is composed of three key activities: 1) context profiling, 2) context-aware composition, and 3) online learning. Context profiling is an offline activity that prepares a knowledge base for context-aware composition. The context-aware composition uses the knowledge base to derive a new product or adapt an existing product based on a product line's context attributes and goals. The online learning optimizes the knowledge base to remove errors and suboptimal information and to incorporate new knowledge. The three activities together form a simple yet powerful variability handling mechanism that learns and adapts a system at runtime in response to changes in system context and goals. We evaluated the ASPL variability mechanism on three small-scale software product lines and obtained promising results. The ASPL approach is, however, still at an initial stage and requires improved development support and more rigorous evaluation.
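    The three-activity loop described above can be sketched as a lookup-and-update cycle. The names and the dictionary-based knowledge base below are hypothetical simplifications of the report's mechanism, not its actual design:

```python
# Hypothetical knowledge base prepared by offline context profiling:
# (context attribute, feature) -> concrete component variant.
KNOWLEDGE_BASE = {
    ("low_memory", "sort"): "insertion_sort",
    ("default", "sort"): "merge_sort",
}

def compose(context, feature):
    """Context-aware composition: derive the variant recorded for this
    context, falling back to the product line's default binding."""
    return KNOWLEDGE_BASE.get((context, feature),
                              KNOWLEDGE_BASE[("default", feature)])

def learn(context, feature, variant):
    """Online learning: rebind the variant after runtime feedback shows
    a better choice for this context."""
    KNOWLEDGE_BASE[(context, feature)] = variant
```

    A learning step simply rebinds the variant for that context, so the next composition request picks the improved choice.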

  • 8.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Andersson, Jesper
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Weyns, Danny
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Katholieke Univ Leuven, Belgium.
    ASPLe: a methodology to develop self-adaptive software systems with systematic reuse (2020). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 167, p. 1-19, article id 110626. Article in journal (Refereed)
    Abstract [en]

    More than two decades of research have demonstrated an increasing need for software systems to be self-adaptive. Self-adaptation is required to deal with runtime dynamics which are difficult to predict before deployment. A vast body of knowledge to develop Self-Adaptive Software Systems (SASS) has been established. We, however, discovered a lack of process support to develop self-adaptive systems with reuse. To that end, we propose a domain-engineering based methodology, Autonomic Software Product Lines engineering (ASPLe), which provides step-by-step guidelines for developing families of SASS with systematic reuse. The evaluation results from a case study show positive effects on quality and reuse for self-adaptive systems designed using the ASPLe compared to state-of-the-art engineering practices.

  • 9.
    Abdeen, Waleed
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Reducing the Distance Between Requirements Engineering and Verification (2022). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: Requirements engineering and verification (REV) processes play essential roles in software product development. There are physical and non-physical distances between entities (actors, artifacts, and activities) in these processes. Current practices that reduce the distances, such as automated testing and alignment of document structure and tracing, only partially close the above-mentioned gap.

    Objective: The aim of this thesis is to investigate solutions w.r.t. their ability to reduce the distances between requirements engineering and verification. Two techniques explored in this thesis are automated testing (model-based testing, MBT) and alignment of document structure and tracing (traceability).

    Method: The research methods used in this thesis are systematic mapping, software requirements mining, case study, literature survey, validation study, and design science.

    Results: MBT and traceability are effective in reducing the distance between requirements and verification. However, both activities have shortcomings that need to be addressed when used for that purpose. Current MBT techniques in the context of software performance do not attain all the goals of MBT: 1) requirements validation, 2) checking the testability of requirements, and 3) the generation of an efficient test suite. These goals are essential to reduce the distance. We developed and assessed a performance requirements verification and test environment generation approach to tackle these shortcomings. Also, traceability between requirements and verification suffers from the low granularity of trace links and does not support the verification of all requirements. We propose the use of taxonomic trace links to trace and align the structure of requirements specifications and verification artifacts. The results from the validation study show that the solution is feasible in practice. However, this comes with challenges that need to be addressed.

    Conclusion: MBT and improved traceability reduce multiple distances between actors, artifacts, and activities in the requirements engineering and verification process. MBT is most effective in reducing the distances when the model used is built from the requirements. Traceability is essential in easing access to relevant information when needed and should not be seen as an overhead. When creating trace links, we need to consider the difference in abstraction, structure, and time between the linked artifacts.

  • 10.
    Abdeen, Waleed
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Taxonomic Trace Links Recommender: Context Aware Hierarchical Classification (2023). In: CEUR Workshop Proceedings / [ed] Ferrari A., Penzenstadler B., Hadar I., Oyedeji S., Abualhaija S., Vogelsang A., Deshpande G., Rachmann A., Gulden J., Wohlgemuth A., Hess A., Fricker S., Guizzardi R., Horkoff J., Perini A., Susi A., Karras O., Dalpiaz F., Moreira A., Amyot D., Spoletini P., CEUR-WS, 2023, Vol. 3378. Conference paper (Refereed)
    Abstract [en]

    In the taxonomic trace links concept, the source and target artifacts are connected through a knowledge organization structure (e.g., a taxonomy). In this paper, we introduce a recommender system that recommends labels for requirements artifacts from a domain-specific taxonomy in order to establish taxonomic trace links. The tool exploits the hierarchical nature of taxonomies and uses requirements text and context information as input to the recommender. © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
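    A top-down walk over a taxonomy is one plausible reading of "exploits the hierarchical nature of taxonomies". The sketch below uses an invented two-level infrastructure taxonomy and plain keyword overlap, not the authors' actual recommender:

```python
# Invented taxonomy (node -> children) and per-node keyword sets.
TAXONOMY = {
    "root": ["structure", "equipment"],
    "structure": ["bridge", "tunnel"],
    "equipment": ["signalling", "lighting"],
}
KEYWORDS = {
    "structure": {"load", "concrete", "span"},
    "equipment": {"power", "signal", "lamp"},
    "bridge": {"span", "deck", "girder"},
    "tunnel": {"bore", "ventilation"},
    "signalling": {"signal", "interlocking"},
    "lighting": {"lamp", "lux"},
}

def recommend(text, node="root"):
    """Walk the taxonomy top-down, at each level choosing the child whose
    keywords best overlap the requirement text (hierarchical classification).
    Stops early when no child shows any evidence."""
    terms = set(text.lower().split())
    children = TAXONOMY.get(node, [])
    if not children:
        return node
    best = max(children, key=lambda c: len(terms & KEYWORDS[c]))
    if len(terms & KEYWORDS[best]) == 0:
        return node  # no evidence below this level: recommend current node
    return recommend(text, best)
```

    Classifying level by level rather than over all leaves at once is what makes the classification hierarchical: each decision only compares siblings.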

  • 11.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chen, Xingru
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Model-Based Testing for Performance Requirements: A Systematic Mapping Study and A Sample Study (2019). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Model-Based Testing (MBT) is a method that supports automated test design by using a model. Although it is adopted in industry, it is still an open area within performance requirements. We aim to look into MBT for performance requirements and find a framework that can model such requirements. We conducted a systematic mapping study, followed by a sample study on software requirements specifications; we then introduced the Performance Requirements Verification and Validation (PRVV) model and, finally, completed another sample study to see how the model works in practice. We found that many models can be used for performance requirements, although their maturity is not yet sufficient. MBT can be implemented in the context of performance, and it has been gaining momentum in recent years. The PRVV model we developed can verify performance requirements and help generate test cases.

  • 12.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chen, Xingru
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An approach for performance requirements verification and test environments generation (2023). In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 28, no 1, p. 117-144. Article in journal (Refereed)
    Abstract [en]

    Model-based testing (MBT) is a method that supports the design and execution of test cases via models that specify the intended behaviors of a system under test. While systematic literature reviews on MBT in general exist, the state of the art on modeling and testing performance requirements has seen much less attention. Therefore, we conducted a systematic mapping study on model-based performance testing. Then, we studied natural-language software requirements specifications in order to understand which performance requirements are typically specified, and how. Since none of the identified MBT techniques supported a major benefit of modeling, namely identifying faults in requirements specifications, we developed the Performance Requirements verificatiOn and Test EnvironmentS generaTion approach (PRO-TEST). Finally, we evaluated PRO-TEST on 149 requirements specifications. We found and analyzed 57 primary studies from the systematic mapping study and extracted 50 performance requirements models. However, those models do not achieve the goals of MBT, which are validating requirements, ensuring their testability, and generating the minimum required test cases. We analyzed 77 Software Requirements Specification (SRS) documents, extracted 149 performance requirements from those SRS, and illustrate that with PRO-TEST we can model performance requirements, find issues in those requirements, and detect missing ones. We detected three not-quantifiable requirements, 43 not-quantified requirements, and 180 underspecified parameters in the 149 modeled performance requirements. Furthermore, we generated 96 test environments from those models. By modeling performance requirements with PRO-TEST, we can identify issues in the requirements related to their ambiguity, measurability, and completeness. Additionally, it allows us to generate parameters for test environments.
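    The distinction between quantified and not-quantified performance requirements that PRO-TEST surfaces can be loosely imitated with a rule-based check. This is a sketch only: the unit list and vague-word list are assumptions, and PRO-TEST itself works from models rather than regular expressions:

```python
import re

# Assumed unit and vague-word lists; treat as a rough approximation.
QUANTIFIED = re.compile(
    r"\d+(\.\d+)?\s*(ms|milliseconds?|seconds?|s|minutes?|tps|%)(?![a-z])",
    re.IGNORECASE)
VAGUE = re.compile(r"\b(fast|quick|responsive|immediately|reasonable)\b",
                   re.IGNORECASE)

def check(requirement):
    """Classify a performance requirement as quantified (number + unit),
    not quantified (vague wording, no bound), or needing manual review."""
    if QUANTIFIED.search(requirement):
        return "quantified"
    if VAGUE.search(requirement):
        return "not quantified"
    return "needs review"
```

    A not-quantified verdict corresponds to a requirement that names a quality ("fast") without a measurable bound, which is exactly the kind of ambiguity the paper reports finding in 43 of the 149 requirements.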

  • 13.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chirtoglou, Alexandros
    HOCHTIEF ViCon GmbH, DEU.
    Paul Schimanski, Christoph
    HOCHTIEF ViCon GmbH, DEU.
    Goli, Heja
    HOCHTIEF ViCon GmbH, DEU.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Taxonomic Trace Links - Rethinking Traceability and its Benefits. Manuscript (preprint) (Other academic)
    Abstract [en]

    Background: Traceability is an important quality of artifacts that are used in knowledge-intensive tasks. When project budgets and time pressure are a reality, this often leads to a down-prioritization of creating trace links. Objective: We propose a new idea that uses knowledge organization structures, such as taxonomies, ontologies, and thesauri, as an auxiliary artifact to establish trace links. In order to investigate the novelty and feasibility of this idea, we study traceability in the area of requirements engineering. Method: First, we conduct a literature survey to investigate to what extent, and how, auxiliary artifacts have been used in the past for requirements traceability. Then, we conduct a validation study in industry, testing the idea of taxonomic trace links with realistic artifacts. Results: We have reviewed 126 studies that investigate requirements traceability; ninety-one of them use auxiliary artifacts in the traceability process. In the validation study, while we encountered six challenges when classifying requirements with a domain-specific taxonomy, we found that designers and engineers are able to classify design objects comprehensively and reliably. Conclusions: The idea of taxonomic trace links is novel and feasible in practice. However, the identified challenges need to be addressed to allow for adoption in practice and enable a transfer to software-intensive contexts.

  • 14.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chirtoglou, Alexandros
    HOCHTIEF ViCon GmbH, Essen, DEU.
    Challenges of Requirements Communication and Digital Assets Verification in Infrastructure Projects. Manuscript (preprint) (Other academic)
    Abstract [en]

    Context: In infrastructure projects with design-build contracts, the supplier delivers digital assets (e.g., 2D or 3D models) as a part of the design deliverable. These digital assets should align with the customer requirements. Poor requirements communication between the customer and the supplier is one of the reasons for project overrun. To the best of our knowledge, no study has yet investigated challenges in requirements communication in the customer-supplier interface. Objective: In this article, we investigated the processes of requirements validation, requirements communication, and digital assets verification, and explored the challenges associated with these processes. Methods: We conducted two exploratory case studies. We interviewed ten experts working with digital assets from three companies working on two infrastructure projects (road and railway). Results: We illustrate the activities, stakeholders, and artifacts involved in requirements communication, requirements validation, and digital asset verification. Furthermore, we identified 14 challenges (in four clusters: requirements quality, trace links, common requirements engineering (RE), and project management) and their causes and consequences in those processes. Conclusion: Communication between the client and supplier in sub-contracted work in infrastructure projects is often indirect. This puts pressure on the quality of the tender documents (mainly requirements documents) that provide the means for communication and control the design verification processes. Hence, it is crucial to ensure the quality of the requirements documents by implementing quality assurance techniques.

  • 15.
    Abdiju, Kushtrim
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Exploring a New Way of Food Inventory Management in Households Using Modern Technologies to Reduce Food Waste (2019). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Food waste is a growing threat to the environment and the economy. Estimates indicate that, annually, a third of the food produced around the world ends up being wasted; only a quarter of that wasted food would be enough to take nearly a billion people out of starvation. Food waste is especially high in more developed countries, including most states of the European Union and the USA. Sector-wise, food is lost from field to fork, with households topping the charts. Overbuying, not knowing what is already in the fridge, and being unaware of food until it eventually expires are among the most common reasons contributing to food waste. Preventing such massive waste could significantly reduce greenhouse gas emissions around the world and help the economy of households, including all parties involved in food production, distribution, and retailing.

    On the other hand, technology has progressed rapidly. The advancement of AI, ML, IoT, and voice-enabled devices has revolutionized many industries and made us more efficient as human beings. Unfortunately, these advancements have not yet had any significant impact in assisting families with their food choices or in preventing them from overbuying and throwing food away. Most proposed solutions addressing this issue do not get integrated into everyday life: they require a lot of manual input, rely entirely on mobile phones, and do not show immediate results to keep users motivated; above all, modern lives are quite complex, so although food waste is an important issue, it is not an everyday concern for the average person.

    This thesis takes into account the shortcomings of previous work and aims to create a more sustainable solution by exploring new ways of food inventory management in households, automating the process so that users do not have to enter data manually. The proposed solution consists of a device that can be easily mounted in any fridge and acts as an interface between users and their food inventory. The device contains a bar-code scanner for item input and a back-end capable of recognizing each item; in return it shows user-friendly and valuable information, such as the approximate price of the item and the approximate due date, and notifies users when an item is about to expire so that they can take appropriate action.

    Seven out of nine participants in the final conceptual-design study said they would use this solution in their homes. The rest of the results from the designed test cases indicate clear excitement and interest among participants and a willingness to see the prototype in a finished state. All comments and insights, together with the future work and how the feedback will feed into the next iteration, are part of the final discussion of this thesis.

  • 16.
    Abdulla, Parosh
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Aronis, Stavros
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Jonsson, Bengt
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Sagonas, Konstantinos
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Optimal dynamic partial order reduction2014In: Proc. 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, New York: ACM Press, 2014, p. 373-384Conference paper (Refereed)
    Abstract [en]

    Stateless model checking is a powerful technique for program verification, which however suffers from an exponential growth in the number of explored executions. A successful technique for reducing this number, while still maintaining complete coverage, is Dynamic Partial Order Reduction (DPOR). We present a new DPOR algorithm, which is the first to be provably optimal in that it always explores the minimal number of executions. It is based on a novel class of sets, called source sets, which replace the role of persistent sets in previous algorithms. First, we show how to modify an existing DPOR algorithm to work with source sets, resulting in an efficient and simple-to-implement algorithm. Second, we extend this algorithm with a novel mechanism, called wakeup trees, that allows it to achieve optimality. We have implemented both algorithms in a stateless model checking tool for Erlang programs. Experiments show that source sets significantly increase the performance and that wakeup trees incur only a small overhead in both time and space.

  • 17.
    Abdulla, Parosh
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Aronis, Stavros
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Jonsson, Bengt
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Sagonas, Konstantinos
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Source Sets: A Foundation for Optimal Dynamic Partial Order Reduction2017In: Journal of the ACM, ISSN 0004-5411, E-ISSN 1557-735X, Vol. 64, no 4, article id 25Article in journal (Refereed)
    Abstract [en]

    Stateless model checking is a powerful method for program verification that, however, suffers from an exponential growth in the number of explored executions. A successful technique for reducing this number, while still maintaining complete coverage, is Dynamic Partial Order Reduction (DPOR), an algorithm originally introduced by Flanagan and Godefroid in 2005 and since then not only used as a point of reference but also extended by various researchers. In this article, we present a new DPOR algorithm, which is the first to be provably optimal in that it always explores the minimal number of executions. It is based on a novel class of sets, called source sets, that replace the role of persistent sets in previous algorithms. We begin by showing how to modify the original DPOR algorithm to work with source sets, resulting in an efficient and simple-to-implement algorithm, called source-DPOR. Subsequently, we enhance this algorithm with a novel mechanism, called wakeup trees, that allows the resulting algorithm, called optimal-DPOR, to achieve optimality. Both algorithms are then extended to computational models where processes may disable each other, for example, via locks. Finally, we discuss tradeoffs of the source- and optimal-DPOR algorithms and present programs that illustrate significant time and space performance differences between them. We have implemented both algorithms in a publicly available stateless model checking tool for Erlang programs, while the source-DPOR algorithm is at the core of a publicly available stateless model checking tool for C/pthread programs running on machines with relaxed memory models. Experiments show that source sets significantly increase the performance of stateless model checking compared to using the original DPOR algorithm and that wakeup trees incur only a small overhead in both time and space in practice.
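The premise behind partial order reduction can be illustrated with a toy example (this is not the authors' algorithm; `dependent` and `trace_signature` are names invented here). Two executions are equivalent when they order every pair of dependent, i.e. racing, events the same way, so a model checker only needs to explore one execution per equivalence class:

```python
def interleavings(a, b):
    """All interleavings of two per-thread event sequences."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0], *rest]
    for rest in interleavings(a, b[1:]):
        yield [b[0], *rest]

def dependent(e, f):
    """Two write events race iff different threads touch the same variable."""
    (t1, _, v1), (t2, _, v2) = e, f
    return t1 != t2 and v1 == v2

def trace_signature(execution):
    """A Mazurkiewicz trace is determined by the order of its dependent pairs."""
    return frozenset((e, f) for i, e in enumerate(execution)
                     for f in execution[i + 1:] if dependent(e, f))

# Thread p writes x then z; thread q writes y then x.
# Events are (thread, program-order index, variable).
p = [("p", 0, "x"), ("p", 1, "z")]
q = [("q", 0, "y"), ("q", 1, "x")]
runs = list(interleavings(p, q))
classes = {trace_signature(r) for r in runs}
print(len(runs), len(classes))  # → 6 2
```

An optimal DPOR in the paper's sense explores exactly one execution per class, here 2 instead of 6; the gap widens exponentially with more threads and events.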

  • 18.
    Abdulla, Parosh Aziz
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Aronis, Stavros
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Atig, Mohamed Faouzi
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Jonsson, Bengt
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Leonardsson, Carl
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Sagonas, Konstantinos
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Stateless model checking for TSO and PSO2015In: Tools and Algorithms for the Construction and Analysis of Systems: TACAS 2015, Springer Berlin/Heidelberg, 2015, p. 353-367Conference paper (Refereed)
  • 19.
    Abdulla, Parosh Aziz
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Atig, Mohamed Faouzi
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Chen, Yu-Fang
    Academia Sinica.
    Leonardsson, Carl
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Rezine, Ahmed
    Linköping University.
    MEMORAX, a Precise and Sound Tool for Automatic Fence Insertion under TSO2013In: Tools and Algorithms for the Construction and Analysis of Systems, Springer Berlin/Heidelberg, 2013, p. 530-536Conference paper (Refereed)
    Download full text (pdf)
    memorax2013.pdf
  • 20.
    Abdunabiev, Isomiddin
    et al.
    Department of Computer and Software, Hanyang University.
    Lee, Choonhwa
    Department of Computer and Software, Hanyang University.
    Hanif, Muhammad
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems.
    An Auto-Scaling Architecture for Container Clusters Using Deep Learning2021In: 2021년도 대한전자공학회 하계종합학술대회 논문집, DBpia , 2021, p. 1660-1663Conference paper (Refereed)
    Abstract [en]

    In the past decade, cloud computing has become one of the essential techniques of many business areas, including social media, online shopping, music streaming, and many more. It is difficult for cloud providers to provision their systems in advance due to fluctuating changes in input workload and resultant resource demand. Therefore, there is a need for auto-scaling technology that can dynamically adjust resource allocation of cloud services based on incoming workload. In this paper, we present a predictive auto-scaler for Kubernetes environments to improve the quality of service. Being based on a proactive model, our proposed auto-scaling method serves as a foundation on which to build scalable and resource-efficient cloud systems.
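The proactive idea can be sketched with a much simpler forecaster than the paper's deep learning model; everything below (the sliding-window forecast, class and method names, the capacity figure) is an invented stand-in for illustration:

```python
import math
from collections import deque

class PredictiveScaler:
    """Toy proactive autoscaler: forecast the next interval's load from
    recent history and size the cluster ahead of demand, instead of
    reacting after the load has already arrived."""

    def __init__(self, per_replica_capacity, window=3, min_replicas=1):
        self.capacity = per_replica_capacity   # requests/s one replica handles
        self.history = deque(maxlen=window)    # recent load observations
        self.min_replicas = min_replicas

    def observe(self, requests_per_s):
        self.history.append(requests_per_s)

    def desired_replicas(self):
        if not self.history:
            return self.min_replicas
        # naive forecast: window mean plus the most recent trend
        forecast = sum(self.history) / len(self.history)
        if len(self.history) >= 2:
            forecast += self.history[-1] - self.history[-2]
        return max(self.min_replicas, math.ceil(forecast / self.capacity))

scaler = PredictiveScaler(per_replica_capacity=100)
for load in (100, 200, 300):   # rising requests/s over three intervals
    scaler.observe(load)
print(scaler.desired_replicas())  # → 3
```

In a real Kubernetes deployment the `desired_replicas()` output would drive the replica count of a Deployment, e.g. through a custom controller; the deep-learning model in the paper replaces the naive mean-plus-trend forecast.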

  • 21. Abel, John H.
    et al.
    Drawert, Brian
    Hellander, Andreas
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Scientific Computing. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computational Science.
    Petzold, Linda R.
    GillesPy: A Python package for stochastic model building and simulation2016In: IEEE Life Sciences Letters, E-ISSN 2332-7685, Vol. 2, p. 35-38Article in journal (Refereed)
  • 22.
    Abella, J.
    et al.
    Barcelona Supercomputing Center, Spain.
    Perez, J.
    BRTA Basque Research and Technology Alliance, Spain.
    Englund, Cristofer
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems.
    Zonooz, B.
    Navinfo Europe, Netherlands.
    Giordana, G.
    AIKO Srl, Italy.
    Donzella, C.
    Exida Development Srl, Italy.
    Cazorla, F. J.
    Barcelona Supercomputing Center, Spain.
    Mezzetti, E.
    Barcelona Supercomputing Center, Spain.
    Serra, I.
    Barcelona Supercomputing Center, Spain.
    Brando, A.
    Barcelona Supercomputing Center, Spain.
    Agirre, I.
    BRTA Basque Research and Technology Alliance, Spain.
    Eizaguirre, F.
    BRTA Basque Research and Technology Alliance, Spain.
    Bui, Thanh
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems.
    Arani, E.
    Navinfo Europe, Netherlands.
    Sarfraz, F.
    Navinfo Europe, Netherlands.
    Balasubramaniam, A.
    Navinfo Europe, Netherlands.
    Badar, A.
    Navinfo Europe, Netherlands.
    Bloise, I.
    AIKO Srl, Italy.
    Feruglio, L.
    AIKO Srl, Italy.
    Cinelli, I.
    AIKO Srl, Italy.
    Brighenti, D.
    Exida Engineering Srl, Italy.
    Cunial, D.
    Exida Engineering Srl, Italy.
    SAFEXPLAIN: Safe and Explainable Critical Embedded Systems Based on AI2023In: Proceedings -Design, Automation and Test in Europe, DATE, Institute of Electrical and Electronics Engineers Inc. , 2023Conference paper (Refereed)
    Abstract [en]

    Deep Learning (DL) techniques are at the heart of most future advanced software functions in Critical Autonomous AI-based Systems (CAIS), where they also represent a major competitive factor. Hence, the economic success of CAIS industries (e.g., automotive, space, railway) depends on their ability to design, implement, qualify, and certify DL-based software products under bounded effort/cost. However, there is a fundamental gap between Functional Safety (FUSA) requirements on CAIS and the nature of DL solutions. This gap stems from the development process of DL libraries and affects high-level safety concepts such as (1) explainability and traceability, (2) suitability for varying safety requirements, (3) FUSA-compliant implementations, and (4) real-time constraints. As a matter of fact, the data-dependent and stochastic nature of DL algorithms clashes with current FUSA practice, which instead builds on deterministic, verifiable, and pass/fail test-based software. The SAFEXPLAIN project tackles these challenges and targets by providing a flexible approach to allow the certification - hence adoption - of DL-based solutions in CAIS building on: (1) DL solutions that provide end-to-end traceability, with specific approaches to explain whether predictions can be trusted and strategies to reach (and prove) correct operation, in accordance to certification standards; (2) alternative and increasingly sophisticated design safety patterns for DL with varying criticality and fault tolerance requirements; (3) DL library implementations that adhere to safety requirements; and (4) computing platform configurations, to regain determinism, and probabilistic timing analyses, to handle the remaining non-determinism.

  • 23.
    Abelli, Björn
    Mälardalen University, School of Business.
    Programmeringens grunder - med exempel i C#2004Book (Other (popular science, discussion, etc.))
    Abstract [en]

    This is a textbook on basic programming. It focuses on teaching what most programming languages have in common: the fundamental elements and program constructs and how these relate to each other, independently of the language. The language a program is written in must be subordinate to the task the program is meant to solve. The book therefore covers the basics of structured programming and shows numerous examples as flowcharts, structure diagrams, and pseudocode as well as source code. The final chapters also cover the basics of object-oriented programming.

    The book plays down the learning of a specific programming language, but in the end programs must still be written in some language. The examples are written in C#, which has caught on as a practical programming language in a very short time and has already begun making its way into university programming courses.

    The book is primarily intended for programming beginners at the university level who want to learn the basics of programming.

  • 24.
    Abheeshta, Putta
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparative Analysis of Software Development Practices across Software Organisations: India and Sweden2016Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. System Development Methodologies (SDMs) have been an area of intensive research in the field of software engineering. Different software organisations adopt different development methodologies and use different development practices. The frequency of usage of development practices and the acceptance factors for adoption of a development methodology are crucial for software organisations. The acceptance factors and development practices differ across geographical locations. Many challenges have been presented in the literature with respect to the mismatch of development practices across organisations while collaborating in distributed development. Little research has been done on the differences across development practices and acceptance factors for the adoption of a particular development methodology. Objectives. The primary objectives of the research are to find out a) differences in (i) practice usage and (ii) acceptance factors such as organisational, social, and cultural factors, and b) to explore the reasons for the differences and investigate the consequences of such differences while collaborating, across organisations located in India and Sweden. Methods. A literature review was conducted by searching scientific databases to identify common agile and plan-driven development practices and acceptance theories for development methodologies. A survey was conducted across organisations located in India and Sweden to find out the usage frequency of development practices and acceptance factors. Ten interviews were conducted with software practitioners from organisations located in India and Sweden to investigate the reasons for and consequences of these differences. Evidence from the literature was used to support the results collected from the interviews. Results. From the survey, organisations in India have adopted plan-driven practices at a higher frequency than those in Sweden, while agile practices were adopted at a higher frequency in Sweden than in India. The number of organisations adopting "pure agile" methodologies was significantly higher in Sweden. Significant differences were found between India and Sweden across acceptance factors such as cultural, organisational, image, and career factors. Cultural, social, human, business, and organisational factors are responsible for these differences across development practices and acceptance factors. Challenges related to communication, coordination, and control were found due to the differences when collaborating between Indian and Swedish sites. Conclusions. The study signifies the importance of identifying the frequency of development practices and the acceptance factors responsible for the adoption of development methodologies in software organisations. A mismatch between these practices will lead to various challenges. The study draws insights into various non-technical factors, such as cultural, human, organisational, business, and social factors, in collaboration between organisations. Variations across these factors will lead to many coordination, communication, and control issues. Keywords: Development Practices, Agile Development, Plan Driven Development, Acceptance Factors, Global Software Development.

    Download full text (pdf)
    fulltext
  • 25.
    Abid, Muhammad Zeeshan
    KTH, School of Information and Communication Technology (ICT). Technische Universität Braunschweig.
    A Multi-leader Approach to Byzantine Fault Tolerance: Achieving Higher Throughput Using Concurrent Consensus2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Byzantine Fault Tolerant protocols are complicated and hard to implement. Today's software industry is reluctant to adopt these protocols because of the high overhead of message exchange in the agreement phase and the high resource consumption necessary to tolerate faults (as 3f + 1 replicas are required to tolerate f faults). Moreover, total ordering of messages is needed by most classical protocols to provide strong consistency in both agreement and execution phases. Research has improved throughput of the execution phase by introducing concurrency using modern multicore infrastructures in recent years. However, improvements to the agreement phase remain an open area.

    Byzantine Fault Tolerant systems use State Machine Replication to tolerate a wide range of faults. The approach uses leader-based consensus algorithms for the deterministic execution of the service on all replicas to make sure all correct replicas reach the same state. For this purpose, several algorithms have been proposed to provide total ordering of messages through an elected leader. Usually, a single leader is considered to be a bottleneck as it cannot provide the desired throughput for real-time software services. In order to achieve a higher throughput there is a need for a solution which can execute multiple consensus rounds concurrently.

    We present a solution that enables multiple consensus rounds in parallel by choosing multiple leaders. By enabling concurrent consensus, our approach can execute several requests in parallel. In our approach we incorporate application-specific knowledge to split the total order of events into multiple partial orders which are causally consistent in order to ensure safety. Furthermore, a dependency check is required for every client request before it is assigned to a particular leader for agreement. This methodology relies on optimistic prediction of dependencies to provide higher throughput. We also propose a solution to correct the course of execution without rolling back if dependencies were wrongly predicted.

    Our evaluation shows that in normal cases this approach can achieve up to 100% higher throughput than conventional approaches for large numbers of clients. We also show that this approach has the potential to perform better in complex scenarios.
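The dependency-based request routing described above can be sketched as follows; this is a toy reconstruction of the idea, not the thesis's protocol, and `assign_leaders`, the round-robin policy, and the key names are all invented for illustration:

```python
def assign_leaders(requests, n_leaders):
    """Requests predicted to touch the same state keys must share a leader
    (one partial order), while disjoint requests can be agreed on by
    different leaders concurrently. `requests` is a list of
    (request_id, keys_touched)."""
    key_owner = {}    # state key -> leader already ordering it
    assignment = {}
    next_leader = 0
    for req_id, keys in requests:
        owners = {key_owner[k] for k in keys if k in key_owner}
        if len(owners) == 1:
            leader = owners.pop()             # stay in the existing partial order
        elif not owners:
            leader = next_leader % n_leaders  # fresh keys: spread round-robin
            next_leader += 1
        else:
            # keys span several partial orders: the optimistic prediction was
            # wrong; fall back to one leader (cf. the thesis's correction step)
            leader = min(owners)
        for k in keys:
            key_owner[k] = leader
        assignment[req_id] = leader
    return assignment

reqs = [("r1", {"a"}), ("r2", {"b"}), ("r3", {"a", "c"})]
print(assign_leaders(reqs, n_leaders=2))  # → {'r1': 0, 'r2': 1, 'r3': 0}
```

Here r1 and r3 conflict on key `a`, so both go through leader 0, while r2 touches only `b` and can be agreed on by leader 1 concurrently.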

    Download full text (pdf)
    Thesis_Zeeshan_Abid_20150701-Final.pdf
  • 26.
    Abo Khalaf, Mulham
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-).
    Near Realtime Object Detection: Optimizing YOLO Models for Efficiency and Accuracy for Computer Vision Applications2024Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The objective of this study is to improve the efficiency and accuracy of YOLO models by optimizing them, particularly when faced with limited computing resources. The urgent need for near-real-time object recognition in applications such as surveillance systems and autonomous driving underscores the significance of processing speed and exceptional accuracy. The thesis focuses on the difficulties of implementing complex object identification models on low-capacity devices, namely the Jetson Orin Nano, and suggests several optimization methods to overcome these obstacles. We performed several trials and made methodological improvements to decrease processing requirements while maintaining strong object detecting performance. Key components of the research include meticulous model training, the use of assessment criteria, and the investigation of optimization effects on model performance in real-life settings. The study showcases the feasibility of achieving optimal performance in YOLO models despite limited resources, bringing substantial advancements in computer vision and machine learning.

    Download full text (pdf)
    fulltext
  • 27.
    Aboulkacim, Adil
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Form data enriching using a post OCR clustering process: Measuring accuracy of field names and field values clustering2022Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    With OCR technologies the text in a form can be read, and the position and contents of each word can be extracted; however, the relation between the words cannot be understood. This thesis aims to solve the problem of enriching data from a structured form, without any pre-set configuration, using clustering. This is done using a quantitative measurement of a developed prototype, counting correctly clustered text boxes, together with a qualitative evaluation. The prototype works by feeding an image of an unfilled form and an image of a filled form containing the data to be enriched to an OCR engine. The OCR engine extracts the text and its positions, which are then run through a post-processing step that, together with a modified Euclidean and fuzzy string search algorithm, is able to cluster field names and field values in the filled-in form image. The results of the prototype for three different form structures, with 15 images per structure, range from 100% to 92% accuracy depending on form structure. This thesis successfully showed the possibility of clustering together names and values in a form, i.e., enriching data from the form.
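The post-OCR pairing step can be sketched roughly like this; it is a toy reconstruction under stated assumptions (boxes as `(text, x, y)` tuples, `difflib` standing in for the thesis's modified fuzzy string search, all thresholds and names invented here):

```python
import math
from difflib import SequenceMatcher

def fuzzy_eq(a, b, threshold=0.8):
    """Tolerate OCR noise when matching a detected label to a template word."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def cluster_fields(template_boxes, filled_boxes):
    """Pair field names with field values. Boxes are (text, x, y) as an OCR
    engine might return them. Words that also occur in the blank template
    are labels; the remaining words are values, each attached to the
    nearest label by Euclidean distance."""
    labels = [(t, x, y) for t, x, y in filled_boxes
              if any(fuzzy_eq(t, tt) for tt, _, _ in template_boxes)]
    values = [b for b in filled_boxes if b not in labels]
    pairs = {}
    for text, x, y in values:
        name, _, _ = min(labels, key=lambda l: math.hypot(l[1] - x, l[2] - y))
        pairs.setdefault(name, []).append(text)
    return pairs

template = [("Name", 10, 10), ("Date", 10, 50)]
filled = [("Name:", 10, 10), ("Alice", 60, 12),
          ("Date", 10, 50), ("2022-05-01", 60, 52)]
print(cluster_fields(template, filled))
# → {'Name:': ['Alice'], 'Date': ['2022-05-01']}
```

The fuzzy comparison lets the slightly different OCR reading "Name:" still match the template word "Name", which is the role the thesis assigns to its fuzzy string search.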

    Download full text (pdf)
    fulltext
  • 28.
    Abraham, Mark James
    et al.
    KTH, Centres, Science for Life Laboratory, SciLifeLab.
    Apostolov, Rossen
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for High Performance Computing, PDC.
    Barnoud, Jonathan
    Univ Groningen, NL-9712 CP Groningen, Netherlands.;Univ Bristol, Intangible Real Lab, Bristol, Avon, England..
    Bauer, Paul
    KTH, Centres, Science for Life Laboratory, SciLifeLab.
    Blau, Christian
    KTH, Centres, Science for Life Laboratory, SciLifeLab.
    Bonvin, Alexandre M. J. J.
    Univ Utrecht, Bijvoet Ctr, Fac Sci, Utrecht, Netherlands..
    Chavent, Matthieu
    Univ Paul Sabatier, IPBS, F-31062 Toulouse, France..
    Chodera, John
    Mem Sloan Kettering Canc Ctr, Sloan Kettering Inst, Computat & Syst Biol Program, New York, NY 10065 USA..
    Condic-Jurkic, Karmen
    Mem Sloan Kettering Canc Ctr, Sloan Kettering Inst, Computat & Syst Biol Program, New York, NY 10065 USA.;Open Force Field Consortium, La Jolla, CA USA..
    Delemotte, Lucie
    KTH, Centres, Science for Life Laboratory, SciLifeLab.
    Grubmueller, Helmut
    Max Planck Inst Biophys Chem, D-37077 Gottingen, Germany..
    Howard, Rebecca
    KTH, Centres, Science for Life Laboratory, SciLifeLab.
    Jordan, E. Joseph
    Stockholm Univ, Dept Biochem & Biophys, Sci Life Lab, Box 1031, SE-17121 Solna, Sweden..
    Lindahl, Erik
    KTH, Centres, Science for Life Laboratory, SciLifeLab.
    Ollila, O. H. Samuli
    Univ Helsinki, Inst Biotechnol, SF-00100 Helsinki, Finland..
    Selent, Jana
    Pompeu Fabra Univ, Hosp del Mar Med Res Inst IMIM, Res Programme Biomed Informat, Barcelona 08002, Spain.;Pompeu Fabra Univ, Dept Expt & Hlth Sci, Barcelona 08002, Spain..
    Smith, Daniel G. A.
    Mol Sci Software Inst, Blacksburg, VA 24060 USA..
    Stansfeld, Phillip J.
    Univ Oxford, Dept Biochem, Oxford OX1 2JD, England.;Univ Warwick, Sch Life Sci, Coventry CV4 7AL, W Midlands, England.;Univ Warwick, Dept Chem, Coventry CV4 7AL, W Midlands, England..
    Tiemann, Johanna K. S.
    Univ Leipzig, Fac Med, Inst Med Phys & Biophys, D-04107 Leipzig, Germany..
    Trellet, Mikael
    Univ Utrecht, Bijvoet Ctr, Fac Sci, Utrecht, Netherlands..
    Woods, Christopher
    Univ Bristol, Bristol BS8 1TH, Avon, England..
    Zhmurov, Artem
    KTH, Centres, Science for Life Laboratory, SciLifeLab.
    Sharing Data from Molecular Simulations2019In: Journal of Chemical Information and Modeling, ISSN 1549-9596, E-ISSN 1549-960X, Vol. 59, no 10, p. 4093-4099Article in journal (Refereed)
    Abstract [en]

    Given the need for modern researchers to produce open, reproducible scientific output, the lack of standards and best practices for sharing data and workflows used to produce and analyze molecular dynamics (MD) simulations has become an important issue in the field. There are now multiple well-established packages to perform molecular dynamics simulations, often highly tuned for exploiting specific classes of hardware, each with strong communities surrounding them, but with very limited interoperability/transferability options. Thus, the choice of the software package often dictates the workflow for both simulation production and analysis. The level of detail in documenting the workflows and analysis code varies greatly in published work, hindering reproducibility of the reported results and the ability for other researchers to build on these studies. An increasing number of researchers are motivated to make their data available, but many challenges remain in order to effectively share and reuse simulation data. To discuss these and other issues related to best practices in the field in general, we organized a workshop in November 2018 (https://bioexcel.eu/events/workshop-on-sharing-data-from-molecular-simulations/). Here, we present a brief overview of this workshop and topics discussed. We hope this effort will spark further conversation in the MD community to pave the way toward more open, interoperable, and reproducible outputs coming from research studies using MD simulations.

  • 29.
    Abrahamsson, Sara
    Linköping University, Department of Computer and Information Science.
    A model to evaluate front-end frameworks for single page applications written in JavaScript2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Despite a constantly growing selection of front-end JavaScript frameworks, there is a lack of research to guide the choice of which one to use in a software project. Instead, the decision is generally based on experience and personal preferences within the team. The aim of this thesis is therefore to present a structured evaluation model to provide for more informed decisions. A preliminary study is carried out where the most important qualities of a framework are identified, both according to previous literature and to practitioners. The pre-study result is used to construct a structured model to assess framework performance for the identified qualities. Finally, a test of the model is carried out to see if it can guide the choice of framework in a specific project. The study shows that the design of the model does contribute with important insights on framework performance in prioritized quality areas and the trade-offs that this entails for other important qualities. Thus, the model provides necessary information to make well-founded decisions. Furthermore, it fills the gap in contemporary research by providing an understanding of what is important in a framework according to practitioners.
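The core of such a structured evaluation model can be sketched as a weighted scoring scheme; the qualities, weights, and 0–5 scale below are illustrative assumptions, not the thesis's actual instrument:

```python
def rank_frameworks(scores, weights):
    """Weighted-sum sketch of a structured evaluation: score each framework
    per quality, weight by the project's priorities, rank by the result.
    `scores`: {framework: {quality: 0..5}}; `weights`: {quality: importance}."""
    total = sum(weights.values())
    ranking = {
        fw: sum(weights[q] * qualities.get(q, 0) for q in weights) / total
        for fw, qualities in scores.items()
    }
    return sorted(ranking.items(), key=lambda kv: kv[1], reverse=True)

scores = {"A": {"performance": 5, "docs": 2},
          "B": {"performance": 3, "docs": 5}}
weights = {"performance": 2, "docs": 1}   # this project values speed most
ranked = rank_frameworks(scores, weights)
print(ranked)  # → [('A', 4.0), ('B', 3.6666666666666665)]
```

Changing the weights changes the ranking, which is exactly the trade-off insight the thesis reports: a prioritized quality can be bought at the cost of others.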

    Download full text (pdf)
    fulltext
  • 30.
    Abrahamsson, William
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Theo, Davnert
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Grid-Based Focus Stacking In Microscopic Photography: Utilizing Grid Systems For Noise Reduction2024Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Microscope cameras possess high zoom capabilities and are frequently used for quality control in various industries. Visus Technology is a company that manufactures and sells these cameras, along with control software. The problem is that these cameras can only be set to one focus depth at a time. Therefore, we were tasked with implementing a fast focus stacking algorithm that combines multiple input images with different focus depths, resulting in a composite image with consistent focus throughout. This will later be integrated into the control software.

    We initially approached the problem by choosing pixels with the highest Laplacian intensity. This resulted in very noisy output images since the blurry images could sometimes produce high-intensity pixels where they were not supposed to be. To address this, we implemented a grid-based focus stacking algorithm. The algorithm divides the input images into tiles, filtering out the blurriest tiles to exclude them from the stacking process. The result is a composite image with significantly reduced blur.
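The tile-based selection described above can be sketched as follows; this is a minimal reconstruction of the idea (sum-of-squared-Laplacian as the sharpness score, function names invented here), not the authors' implementation:

```python
def laplacian_energy(tile):
    """Sharpness score: sum of squared 4-neighbour Laplacian responses."""
    h, w = len(tile), len(tile[0])
    energy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (tile[y - 1][x] + tile[y + 1][x]
                   + tile[y][x - 1] + tile[y][x + 1] - 4 * tile[y][x])
            energy += lap * lap
    return energy

def stack_by_tiles(images, tile):
    """Split each image into tile x tile blocks; per block, copy pixels from
    whichever input image is sharpest there. `images` are equally sized 2-D
    lists of grayscale pixels taken at different focus depths."""
    h, w = len(images[0]), len(images[0][0])
    out = [[0] * w for _ in range(h)]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            def block(img):
                return [row[tx:tx + tile] for row in img[ty:ty + tile]]
            best = max(images, key=lambda im: laplacian_energy(block(im)))
            for y in range(ty, min(ty + tile, h)):
                out[y][tx:tx + tile] = best[y][tx:tx + tile]
    return out
```

Scoring whole tiles rather than single pixels is what suppresses the noise: a blurry image can produce an isolated high-Laplacian pixel, but rarely an entire high-energy tile.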

    Download full text (pdf)
    fulltext
  • 31.
    Abualhaija, Sallam
    et al.
    University of Luxembourg, LUX.
    Fucci, Davide
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Dalpiaz, Fabiano
    Utrecht University, NLD.
    Franch, Xavier
    Universitat Politècnica de Catalunya, ESP.
    3rd workshop on natural language processing for requirements engineering (NLP4RE'20)2020In: CEUR Workshop Proceedings / [ed] Sabetzadeh M.,Vogelsang A.,Abualhaija S.,Borg M.,Dalpiaz F.,Daneva M.,Fernandez N.C.,Franch X.,Fucci D.,Gervasi V.,Groen E.,Guizzardi R.,Herrmann A.,Horkoff J.,Mich L.,Perini A.,Susi A., CEUR-WS , 2020, Vol. 2584Conference paper (Refereed)
    Download full text (pdf)
    3rd workshop on natural language processing for requirements engineering (NLP4RE'20)
  • 32.
    Abu-Sheikh, Khalil
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Reviewing and Evaluating Techniques for Modeling and Analyzing Security Requirements2007Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    The software engineering community recognized the importance of addressing security requirements together with other functional requirements from the beginning of the software development life cycle, and several techniques have been developed to achieve this goal. We therefore conducted a theoretical study focused on reviewing and evaluating some of the techniques used to model and analyze security requirements. The Abuse Cases, Misuse Cases, Data Sensitivity and Threat Analyses, Strategic Modeling, and Attack Trees techniques are investigated in detail to understand and highlight the similarities and differences between them. We found that using these techniques generally helps requirements engineers specify more detailed security requirements. All of these techniques cover the concepts of security, but at different levels, and the existence of different techniques provides a variety of levels for modeling and analyzing security requirements. This helps the requirements engineer decide which technique to use to address security issues for the system under investigation. Finally, we found that using only one of these techniques is not sufficient to satisfy the security requirements of the system under investigation. Consequently, we consider it beneficial to combine the Abuse Cases or Misuse Cases techniques with the Attack Trees technique, or to combine the Strategic Modeling and Attack Trees techniques, in order to model and analyze the security requirements of the system under investigation. The focus on the Attack Trees technique is due to the reusability of the produced attack trees; the technique also helps cover a wide range of attacks, thus covering security concepts as well as security requirements in a proper way.
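An attack tree, one of the techniques the abstract reviews, can be sketched as a small AND/OR structure; the example tree and names below are invented for illustration, not taken from the thesis:

```python
class AttackNode:
    """Minimal attack-tree node: a goal is feasible if all children of an
    AND node, or any child of an OR node, are feasible."""

    def __init__(self, name, gate="OR", children=(), feasible=False):
        self.name, self.gate = name, gate
        self.children = list(children)
        self.feasible = feasible   # for leaves: can the attacker do this step?

    def evaluate(self):
        if not self.children:
            return self.feasible
        results = [c.evaluate() for c in self.children]
        return all(results) if self.gate == "AND" else any(results)

steal_password = AttackNode("steal password", "OR", [
    AttackNode("phishing", feasible=True),
    AttackNode("crack hash", "AND", [
        AttackNode("obtain hash", feasible=False),
        AttackNode("brute force", feasible=True),
    ]),
])
print(steal_password.evaluate())  # → True (the phishing branch suffices)
```

The reusability the abstract emphasizes comes from exactly this structure: a subtree like "crack hash" can be grafted unchanged into the attack trees of other systems.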

    Download full text (pdf)
    FULLTEXT01
  • 33.
    Acharya, Mod Nath
    et al.
    Blekinge Institute of Technology, School of Computing.
    Aslam, Nazam
    Blekinge Institute of Technology, School of Computing.
    Coordination in Global Software Development: Challenges, associated threats, and mitigating practices2012Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Global Software Development (GSD) is an emerging trend in today's software world in which teams are geographically dispersed, either in close proximity or globally. GSD provides development companies with certain advantages, such as lower development cost and access to cheap and skilled labour. This type of development is, however, considered riskier and more challenging than projects developed by teams under the same roof. GSD projects are inherently cooperative: many software developers work on a common project, share information and coordinate activities. Coordination is a fundamental part of software development. GSD comprises different types of development arrangements, i.e. insourcing, outsourcing, nearshoring or farshoring, and whichever arrangement a company selects, challenges to coordination exist. Knowledge of the potential challenges, the associated threats to coordination and the practices that mitigate them therefore plays a vital role in running a successful global project.

    Download full text (pdf)
    FULLTEXT01
  • 34.
    Achilleos, Achilleas
    et al.
    Frederick University, Cyprus.
    Mettouris, Christos
    University of Cyprus, Cyprus.
    Yeratziotis, Alexandros
    University of Cyprus, Cyprus.
    Papadopoulos, George
    University of Cyprus, Cyprus.
    Pllana, Sabri
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Huber, Florian
    SYNYO GmbH, Austria.
    Jäger, Bernhard
    SYNYO GmbH, Austria.
    Leitner, Peter
    SYNYO GmbH, Austria.
    Ocsovszky, Zsófia
    BioTalentum Ltd, Hungary.
    Dinnyés, András
    BioTalentum Ltd, Hungary.
    SciChallenge: A Social Media Aware Platform for Contest-Based STEM Education and Motivation of Young Students2019In: IEEE Transactions on Learning Technologies, E-ISSN 1939-1382, Vol. 12, no 1, p. 98-111Article in journal (Refereed)
    Abstract [en]

    Scientific and technological innovations have become increasingly important as we face the benefits and challenges of both globalization and a knowledge-based economy. Still, enrolment rates in STEM degrees are low in many European countries, and consequently industry lacks an adequately educated workforce. We believe that this can mainly be attributed to pedagogical issues, such as the lack of engaging hands-on activities in science and math education in middle and high schools. In this paper, we report our work in the SciChallenge European project, which aims at increasing the interest of pre-university students in STEM disciplines through its distinguishing feature, the systematic use of social media for the provision and evaluation of student-generated content. A social media-aware contest and platform were developed and tested in a pan-European contest that attracted more than 700 participants. The statistical analysis revealed that the platform and contest positively influenced participants' STEM learning and motivation, while only the gender factor in the younger study group appeared to affect the outcomes (p < .05).

  • 35.
    Adborn, Mats
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Visualisering av datastrukturer: Utveckling av ett tolkningsverktyg2013Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Interpretation and assimilation of data structures, organized information and source code files occur frequently during software development. This kind of information is stored in text-based form, and understanding it requires great thoroughness and a large investment of time on the developer's part. This thesis describes the development of a utility program prototype that automates the parsing of XML data and of source code files in the programming languages C and C++, with the aim of simplifying the interpretation process. The program creates and presents a visual graph of the structure found, using an algorithm that can present arbitrarily large XML files as well as a limited number of concurrent source code files. The effects on interpretation time and reliability have been evaluated in a survey among software development students. The result showed a certain increase in the number of correct conclusions drawn by the participants after studying the visual representation compared to the original text-based form. The amount of time used was not measured other than subjectively by the users themselves, a predominant proportion of whom reported needing less time when using the graphical representation. The thesis shows that this or an equivalent utility can enhance the assimilation of data structures by increasing reliability while simultaneously decreasing the time needed. Still, the quantifiable size of these gains remains statistically uncertain.
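    The core of the approach described, turning an XML document's element hierarchy into a graph that can then be drawn, can be sketched as follows. This is an illustrative reconstruction of the idea, not the thesis's actual tool (which also handles C and C++ sources); it flattens the tree into node and edge lists that any graph renderer could consume.

```python
# Sketch: turn an XML element hierarchy into a flat node/edge list
# suitable for a graph renderer. Only the XML case is illustrated.
import xml.etree.ElementTree as ET

def xml_to_graph(xml_text: str):
    root = ET.fromstring(xml_text)
    nodes, edges = [], []

    def walk(elem, parent_id=None):
        node_id = len(nodes)          # ids assigned in document order
        nodes.append(elem.tag)
        if parent_id is not None:
            edges.append((parent_id, node_id))
        for child in elem:
            walk(child, node_id)

    walk(root)
    return nodes, edges

nodes, edges = xml_to_graph("<a><b><c/></b><d/></a>")
print(nodes)   # ['a', 'b', 'c', 'd']
print(edges)   # [(0, 1), (1, 2), (0, 3)]
```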

    Download full text (pdf)
    malu0900-ex-fulltext
  • 36.
    Adolfsen, Linus
    Blekinge Institute of Technology, School of Engineering.
    Parameterstyrd tillverkning av rör för marina fartyg2012Student thesis
    Abstract [sv]

    The content of this report is the result of a component of the Development Engineer programme in Mechanical Engineering. The work was carried out in collaboration between Linus Adolfsen, Kockums AB and Blekinge Institute of Technology. The report covers two main parts, one practical and one theoretical. The first, practical part aimed to find an efficient method for bridging the step from model to reality. This resulted in in-house-developed software that can read the output file from Tribon (CAD software) and translate it into a program file for the Herber CNC 90 bending machine. The second part is theoretical and analyzes the operations from the perspective of enabling prefabrication. The result was an analysis of the operations concerned, with proposals for how to address the problems and obstacles that exist today, and it also gave rise to many suggestions for further study.

    Download full text (pdf)
    FULLTEXT01
  • 37.
    Adolfsson, Emma
    et al.
    Örebro University, School of Medical Sciences. Department of Laboratory Medicine.
    Jonasson, Jon
    Department of Laboratory Medicine, Örebro University Hospital, Örebro, Sweden.
    Kashyap, Aniruddh
    Department of Laboratory Medicine, Örebro University Hospital, Örebro, Sweden.
    Nordensköld, Anna
    Department of Cardiology, Faculty of Medicine and Health, Örebro University Hospital, Örebro, Sweden.
    Green, Anna
    Örebro University, School of Medical Sciences. Örebro University Hospital. Department of Laboratory Medicine.
    CNV-Z; a new tool for detecting copy number variation in next generation sequencing data2023In: SoftwareX, E-ISSN 2352-7110, Vol. 24, article id 101530Article in journal (Refereed)
    Abstract [en]

    We developed an efficient approach to diagnostic copy number analysis of targeted gene panel or whole exome sequencing (WES) data. Here we present CNV-Z as a new tool for the detection of copy number variants (CNVs). Deletions and duplications of chromosomal regions are widely implicated in both genomic evolution and genetic disorders. However, calling CNVs from targeted or exome sequence data is challenging. In most cases, the copy number of a chromosomal region is estimated as the depth of reads mapping to a certain bin or sliding window divided by the expected number of reads derived from a set of reference samples. This approach inevitably misses smaller CNVs on an irregular basis, and quite frequently results in a disturbing number of false positive CNVs. We developed an alternative approach that detects CNVs based on deviation from the expected read depth per position, instead of per region. Used with caution, the cohort of samples in the same run can serve as the reference. With appropriate filtering, high-quality DNA and a suitable set of reference samples, CNV-Z detects CNVs ranging in length from one nucleotide to an entire chromosome, with few false positives. Performance was demonstrated by benchmarking against both in-house targeted gene panel NGS data and a publicly available NGS dataset, both with CNVs validated by multiplex ligation-dependent probe amplification (MLPA). The outcome shows that CNV-Z detects single- and multi-exonic CNVs with high specificity and sensitivity using different kinds of NGS data. At the gene level, CNV-Z shows both excellent sensitivity and specificity. Compared to competing CNV callers, CNV-Z shows higher specificity and positive predictive value for detecting exonic CNVs.
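    The per-position read-depth idea described in the abstract can be sketched in a few lines. This is an illustration of the general principle, not CNV-Z's actual algorithm: each sample is normalized by its mean coverage, the expected depth at a position is taken as the median across the reference samples, and a position whose ratio deviates strongly is flagged. The depths below are hypothetical.

```python
# Illustration of per-position copy-number estimation (not CNV-Z itself):
# observed depth at each position divided by the expected depth, where
# the expectation is the per-position median over reference samples
# after normalizing every sample by its mean coverage.
from statistics import mean, median

def copy_ratios(sample, references):
    norm_sample = [d / mean(sample) for d in sample]
    norm_refs = [[d / mean(r) for d in r] for r in references]
    ratios = []
    for i, obs in enumerate(norm_sample):
        expected = median(r[i] for r in norm_refs)
        ratios.append(obs / expected)
    return ratios

# Hypothetical depths: position 2 looks like a heterozygous deletion (~0.5x).
sample = [100, 98, 51, 102, 99]
refs = [[101, 99, 100, 100, 100], [99, 100, 101, 98, 102]]
flagged = [i for i, r in enumerate(copy_ratios(sample, refs)) if r < 0.7]
print(flagged)  # [2]
```

    A real caller additionally needs variance modeling and filtering, as the abstract notes; this sketch only shows why a single-position signal can survive where a windowed average would dilute it.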

  • 38.
    Adolfsson, Fredrik
    Linköping University, Department of Computer and Information Science, Software and Systems.
    A Model-Based Approach to Hands Overlay for Augmented Reality2021Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    Augmented Reality is a technology where the user sees the environment mixed with virtual content such as text, animations, pictures, and videos. Remote guidance is a sub-field of Augmented Reality in which guidance is given remotely to identify and solve problems without being there in person. Using hands overlay, the guide can use his or her hand to point and show gestures in real time. To do this, one needs to track the hands and create a video stream that represents them. The video stream of the hands is then overlaid on top of the video from the individual getting help. A solution currently used in industry is image segmentation, in which an image is divided into foreground and background to decide what to include. To work correctly, this requires distinct differences between the pixels that should be included and the ones that should be discarded. This thesis instead investigates a model-based approach to hand tracking, where points of interest on the hands are tracked to build a 3D model of them. A model-based solution is based on sensor data, meaning that it would not have the limitations that image segmentation has. A prototype was developed and integrated into the existing solution. The hand modeling is done in a Unity application and then transferred into the existing application. The results show a clear but modest overhead, so the prototype can run on a normal computer. The prototype works as a proof of concept and shows the potential of a model-based approach.
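    The image-segmentation limitation the thesis argues against can be shown in miniature. The example below is hypothetical (grey levels 0-255, one image row, a simple difference threshold): a pixel is kept as foreground only if it differs enough from the background, so hand pixels whose colour is close to the scene are lost, which is exactly the failure mode motivating the model-based approach.

```python
# Background-difference segmentation in miniature: keep a pixel only if
# it differs from the background by more than a threshold.
background = [30, 32, 29, 31, 30, 33]
frame      = [30, 200, 29, 190, 45, 33]   # a hand partly overlaps the scene

def segment(frame, background, threshold=50):
    return [abs(f - b) > threshold for f, b in zip(frame, background)]

mask = segment(frame, background)
print(mask)  # [False, True, False, True, False, False]
```

    The pixel at index 4 belongs to the hand but is discarded because its grey level is too close to the background; a model built from tracked hand points would still cover it.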

    Download full text (pdf)
    fulltext
  • 39.
    Adolfsson, Victor
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    The State of the Art in Distributed Mobile Robotics2001Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    Distributed Mobile Robotics (DMR) is a multidisciplinary research area with many open research questions. This is a survey of the state of the art in Distributed Mobile Robotics research. DMR is sometimes referred to as cooperative robotics or multi-robotic systems. DMR is about how multiple robots can cooperate to achieve goals and complete tasks better than single robot systems. It covers architectures, communication, learning, exploration and many other areas presented in this master thesis.

    Download full text (pdf)
    FULLTEXT01
  • 40.
    Afram, Gabriel
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Genomsökning av filsystem för att hitta personuppgifter: Med Linear chain conditional random field och Regular expression2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The new General Data Protection Regulation (GDPR) applies to all companies within the European Union from 25 May 2018. This means stricter legal requirements for companies that in some way store personal data. The goal of this project is therefore to make it easier for companies to meet the new legal requirements, by creating a tool that searches file systems and visually shows the user, in a graphical user interface, which files contain personal data. The tool uses named entity recognition with the linear chain conditional random field algorithm, a type of supervised learning method in machine learning. This algorithm is used in the project to find names and addresses in files. The models are trained with different parameters using the Stanford NER library in Java. The models are tested on a test file containing 45,000 words, where each model predicts the classes of all words in the file. The models are then compared with each other using precision, recall and F-score to find the best model. The tool also uses regular expressions to find email addresses, IP numbers and social security numbers. The results for the final machine learning model show that it does not find all names and addresses, but this can be improved by increasing the amount of training data. However, that requires a more powerful computer than the one used in this project. An analysis of how the Swedish language is structured would also be needed in order to apply the most appropriate parameters for training the model.
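    The regular-expression part of the tool can be sketched as below. The patterns are simplified illustrations of the three categories the thesis mentions (email addresses, IP numbers, Swedish personal identity numbers), not the thesis's actual expressions, and the sample text is invented.

```python
# Illustrative regular expressions for the kinds of personal data the
# tool scans for. Patterns are simplified examples, not the thesis's own.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    # Swedish personal identity number: YYMMDD-NNNN or YYYYMMDD-NNNN.
    "personnummer": re.compile(r"\b(?:\d{2})?\d{6}[-+]\d{4}\b"),
}

def scan(text):
    """Return every match in the text, grouped by kind of personal data."""
    return {kind: pat.findall(text) for kind, pat in PATTERNS.items()}

hits = scan("Contact anna@example.se from 192.168.0.1, pnr 850709-9805.")
print(hits["email"])         # ['anna@example.se']
print(hits["ipv4"])          # ['192.168.0.1']
print(hits["personnummer"])  # ['850709-9805']
```

    Regular expressions handle such fixed-format identifiers well; free-form names and addresses are what require the trained CRF model.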

    Download full text (pdf)
    fulltext
  • 41.
    Aftarczuk, Kamila
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Evaluation of selected data mining algorithms implemented in Medical Decision Support Systems2007Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    The goal of this master's thesis is to identify and evaluate data mining algorithms that are commonly implemented in modern Medical Decision Support Systems (MDSS). These systems are used in various healthcare units all over the world, and the institutions using them store large amounts of medical data. This data may contain relevant medical information hidden in patterns buried among the records. Within the research, several popular MDSSs are analyzed in order to determine the data mining algorithms they most commonly utilize. Three algorithms were identified: Naïve Bayes, Multilayer Perceptron and C4.5. Prior to the analyses, the algorithms were calibrated; several configurations were tested in order to determine the best settings. Afterwards, a final comparison ordered the algorithms with respect to their performance, based on a set of performance metrics. The analyses were conducted in WEKA on five UCI medical datasets: breast cancer, hepatitis, heart disease, dermatology and diabetes. The analyses showed that it is very difficult to name a single data mining algorithm as the most suitable for medical data, as the results obtained for the algorithms were very similar. However, the final evaluation of the outcomes allowed singling out Naïve Bayes as the best classifier for the given domain, followed by the Multilayer Perceptron and C4.5.
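    The Naïve Bayes classifier that came out on top can be sketched from scratch in a few lines. This is a minimal Gaussian variant on hypothetical one-feature data (e.g. a lab value per patient), purely for illustration; it is not the WEKA implementation used in the thesis.

```python
# Minimal Gaussian Naive Bayes: class priors plus per-feature
# mean/standard deviation, combined under the independence assumption.
from collections import defaultdict
from math import exp, pi, sqrt
from statistics import mean, stdev

def fit(X, y):
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    model = {}
    for cls, rows in by_class.items():
        cols = list(zip(*rows))          # one tuple per feature
        model[cls] = {
            "prior": len(rows) / len(X),
            "stats": [(mean(c), stdev(c) or 1e-9) for c in cols],
        }
    return model

def gaussian(x, mu, sigma):
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sqrt(2 * pi) * sigma)

def predict(model, x):
    best_cls, best_p = None, -1.0
    for cls, m in model.items():
        p = m["prior"]
        for xi, (mu, sigma) in zip(x, m["stats"]):
            p *= gaussian(xi, mu, sigma)   # independence assumption
        if p > best_p:
            best_cls, best_p = cls, p
    return best_cls

# Hypothetical one-feature dataset (a lab value per patient).
X = [[1.0], [1.2], [0.9], [3.0], [3.2], [2.9]]
y = ["healthy", "healthy", "healthy", "sick", "sick", "sick"]
model = fit(X, y)
print(predict(model, [1.1]))  # healthy
```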

    Download full text (pdf)
    FULLTEXT01
  • 42. Afzal, Wasif
    Lessons from applying experimentation in software engineering prediction systems2008Conference paper (Refereed)
    Abstract [en]

    Within software engineering prediction systems, experiments are undertaken primarily to investigate relationships and to measure and compare models' accuracy. This paper discusses our experience and presents useful lessons and guidelines for experimenting with software engineering prediction systems. For this purpose, we use a typical software engineering experimentation process as a baseline. We found that the typical experimentation process is supportive in developing prediction systems, and we highlight issues more central to the domain of software engineering prediction systems.

    Download full text (pdf)
    Lessons from applying experimentation in software engineering prediction systems
  • 43.
    Afzal, Wasif
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Metrics in Software Test Planning and Test Design Processes2007Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    Software metrics play an important role in measuring attributes that are critical to the success of a software project. Measurement of these attributes helps to make the characteristics of, and relationships between, the attributes clearer, which in turn supports informed decision making. The field of software engineering is affected by infrequent, incomplete and inconsistent measurements. Software testing is an integral part of software development, providing opportunities for the measurement of process attributes, and measuring the attributes of the software testing process gives management better insight into it. The aim of this thesis is to investigate the metric support for the software test planning and test design processes. The study comprises an extensive literature review and follows a methodical approach consisting of two steps. The first step analyzes the key phases in the software testing life cycle, the inputs required for starting the software test planning and design processes, and the metrics indicating the end of these processes. After establishing a basic understanding of the related concepts, the second step identifies the attributes of the software test planning and test design processes, including the metric support for each identified attribute. The results of the literature survey showed that there are a number of different measurable attributes for the software test planning and test design processes. The study partitioned these attributes into multiple categories and, for each attribute, studied the existing measurements. A consolidation of these measurements is presented in this thesis, intended to give management an opportunity to consider improvements in these processes.

    Download full text (pdf)
    FULLTEXT01
  • 44. Afzal, Wasif
    Search-based approaches to software fault prediction and software testing2009Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Software verification and validation activities are essential for software quality but also constitute a large part of software development costs. Efficient and cost-effective software verification and validation activities are therefore both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions related to software quality (when to stop testing, the testing schedule, and testing resource allocation) need to be as accurate as possible. This thesis investigates the application of search-based techniques within two activities of software verification and validation: software fault prediction and software testing for non-functional system properties. Software fault prediction modeling can provide support for making the important decisions outlined above. In this thesis we empirically evaluate symbolic regression using genetic programming (a search-based technique) as a potential method for software fault prediction. Using data sets from both industrial and open-source software, the strengths and weaknesses of applying symbolic regression in genetic programming are evaluated against competing techniques. In addition to software fault prediction, this thesis also consolidates available research into predictive modeling of other attributes by applying symbolic regression in genetic programming, thus presenting a broader perspective. As an extension of the application of search-based techniques within software verification and validation, this thesis further investigates the extent to which search-based techniques are applied for testing non-functional system properties. Based on the research findings in this thesis, it can be concluded that applying symbolic regression in genetic programming may be a viable technique for software fault prediction. We additionally sought literature evidence of other search-based techniques being applied for testing non-functional system properties, thereby contributing to the growing application of search-based techniques in diverse activities within software verification and validation.

    Download full text (pdf)
    FULLTEXT01
  • 45. Afzal, Wasif
    Search-Based Prediction of Software Quality: Evaluations and Comparisons2011Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Software verification and validation (V&V) activities are critical for achieving software quality; however, these activities also constitute a large part of the costs of developing software. Efficient and effective software V&V activities are therefore both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions that affect software quality, e.g., how to allocate testing resources, develop testing schedules and decide when to stop testing, need to be as stable and accurate as possible. The objective of this thesis is to investigate how search-based techniques can support decision-making and help control variation in software V&V activities, thereby indirectly improving software quality. Several themes in providing this support are investigated: predicting the reliability of future software versions based on fault history; fault prediction to improve test phase efficiency; assignment of resources to fixing faults; and distinguishing fault-prone software modules from non-faulty ones. A common element in these investigations is the use of search-based techniques, often also called metaheuristic techniques, for supporting the V&V decision-making processes. Search-based techniques are promising since software V&V, like many real-world problems, can be formulated as an optimization problem where near-optimal solutions are often good enough. Moreover, these techniques are general optimization solutions that can potentially be applied across a larger variety of decision-making situations than other existing alternatives. Apart from presenting the current state of the art, in the form of a systematic literature review, and performing comparative evaluations of a variety of metaheuristic techniques on large-scale projects (both industrial and open-source), this thesis also presents methodological investigations using search-based techniques that are relevant to the task of software quality measurement and prediction. The results of applying search-based techniques in large-scale projects, while investigating a variety of research themes, show that they consistently give competitive results in comparison with existing techniques. Based on the research findings, we conclude that search-based techniques are viable for supporting the decision-making processes within software V&V activities. The accuracy and consistency of these techniques make them important tools when developing future decision support for effective management of software V&V activities.

    Download full text (pdf)
    FULLTEXT01
  • 46.
    Afzal, Wasif
    Blekinge Institute of Technology.
    Using faults-slip-through metric as a predictor of fault-proneness2010In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC, IEEE , 2010Conference paper (Refereed)
    Abstract [en]

    The majority of software faults are present in a small number of modules; accurate prediction of fault-prone modules therefore helps improve software quality by focusing testing efforts on a subset of modules. This paper evaluates the use of the faults-slip-through (FST) metric as a potential predictor of fault-prone modules. Rather than predicting the fault-prone modules for the complete test phase, the prediction is done at the specific test levels of integration and system test. We applied eight classification techniques to the task of identifying fault-prone modules, representing a variety of approaches: a standard statistical classification technique (logistic regression), tree-structured classifiers (C4.5 and random forests), a Bayesian technique (Naïve Bayes), machine-learning techniques (support vector machines and back-propagation artificial neural networks) and search-based techniques (genetic programming and artificial immune recognition systems), on FST data collected from two large industrial projects in the telecommunication domain. Results: Using the area under the receiver operating characteristic (ROC) curve and the location of (PF, PD) pairs in the ROC space, genetic programming showed impressive results in comparison with the other techniques for predicting fault-prone modules at both the integration and system test levels, and its accuracy is statistically significant in comparison with the majority of the techniques. The use of the faults-slip-through metric in general provided good prediction results at the two test levels, and the metric has the potential to be a generally useful predictor of fault-proneness at the integration and system test levels.
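    The evaluation criterion used in the abstract, area under the ROC curve over (PF, PD) pairs, i.e. (false positive rate, probability of detection), can be computed with the trapezoidal rule. The points below are hypothetical, not taken from the paper; the sketch only illustrates the measure itself.

```python
# Area under the ROC curve from a set of (PF, PD) points via the
# trapezoidal rule, anchored at the (0,0) and (1,1) corners.
def roc_auc(points):
    pts = sorted(points)                      # order by PF
    pts = [(0.0, 0.0)] + pts + [(1.0, 1.0)]
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0   # trapezoid per segment
    return area

points = [(0.1, 0.6), (0.3, 0.8), (0.5, 0.9)]
print(round(roc_auc(points), 3))  # 0.815
```

    An AUC of 0.5 corresponds to the diagonal (random guessing); classifiers whose (PF, PD) pairs sit toward the upper-left corner of ROC space score closer to 1.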

    Download full text (pdf)
    fulltext
  • 47. Afzal, Wasif
    et al.
    Ghazi, Ahmad Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Itkonen, Juha
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Andrews, Anneliese
    Bhatti, Khurram
    An experiment on the effectiveness and efficiency of exploratory testing2015In: Empirical Software Engineering, ISSN 1382-3256, Vol. 20, no 3, p. 844-878Article in journal (Refereed)
    Abstract [en]

    The exploratory testing (ET) approach is commonly applied in industry, but lacks scientific research. The scientific community needs quantitative results on the performance of ET taken from realistic experimental settings. The objective of this paper is to quantify the effectiveness and efficiency of ET vs. testing with documented test cases (test case based testing, TCT). We performed four controlled experiments where a total of 24 practitioners and 46 students performed manual functional testing using ET and TCT. We measured the number of identified defects in the 90-minute testing sessions, the detection difficulty, severity and types of the detected defects, and the number of false defect reports. The results show that ET found a significantly greater number of defects. ET also found significantly more defects of varying levels of difficulty, types and severity levels. However, the two testing approaches did not differ significantly in terms of the number of false defect reports submitted. We conclude that ET was more efficient than TCT in our experiment. ET was also more effective than TCT when detection difficulty, type of defects and severity levels are considered. The two approaches are comparable when it comes to the number of false defect reports submitted.

    Download full text (pdf)
    fulltext
  • 48. Afzal, Wasif
    et al.
    Torkar, Richard
    A Comparative Evaluation of Using Genetic Programming for Predicting Fault Count Data2008Conference paper (Refereed)
    Abstract [en]

    There have been a number of software reliability growth models (SRGMs) proposed in the literature. For several reasons, such as violation of the models' assumptions and the complexity of the models, practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and the use of genetic programming (GP) for modeling software reliability growth, based on weekly fault count data from three different industrial projects. The motivation for using a GP approach is its ability to evolve a model based entirely on prior data, without the need to make underlying assumptions. The results show the strengths of using GP for predicting fault count data.
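    The principle behind evolving a model "based entirely on prior data" can be shown in a toy form. The data and candidate expressions below are hypothetical, and a real GP system evolves its candidates rather than enumerating a fixed set; the sketch only illustrates selection by fit to the observed fault counts, with no distributional assumptions.

```python
# Toy symbolic-regression selection: score candidate model expressions
# purely by their squared error against observed fault-count data.
week = [1, 2, 3, 4, 5, 6]
faults = [3, 7, 12, 19, 27, 38]      # hypothetical cumulative fault counts

candidates = {
    "linear":    lambda t: 6 * t - 4,
    "quadratic": lambda t: t * t + 1,
    "cubic":     lambda t: t ** 3 / 6,
}

def sse(model):
    """Sum of squared errors of a candidate model over the data."""
    return sum((model(t) - f) ** 2 for t, f in zip(week, faults))

best = min(candidates, key=lambda name: sse(candidates[name]))
print(best)  # quadratic
```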

  • 49.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Torkar, Richard
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Incorporating Metrics in an Organizational Test Strategy2008Conference paper (Refereed)
    Abstract [en]

    An organizational-level test strategy needs to incorporate metrics to make the testing activities visible and available for process improvement. The majority of testing measurements are based on faults found in the test execution phase. In contrast, this paper investigates metrics to support the software test planning and test design processes. We have assembled metrics for these two process types to support management in carrying out evidence-based test process improvement and to incorporate suitable metrics into an organizational-level test strategy. The study is composed of two steps: the first creates the relevant context by analyzing the key phases in the software testing life cycle, while the second identifies the attributes of the software test planning and test design processes along with metric support for each identified attribute.

    Download full text (pdf)
    FULLTEXT01
  • 50. Afzal, Wasif
    et al.
    Torkar, Richard
    On the application of genetic programming for software engineering predictive modeling: A systematic review2011In: Expert Systems with Applications, ISSN 0957-4174 , Vol. 38, no 9, p. 11984-11997Article, review/survey (Refereed)
    Abstract [en]

    The objective of this paper is to investigate the evidence for symbolic regression using genetic programming (GP) being an effective method for prediction and estimation in software engineering, when compared with regression/machine learning models and other comparison groups (including comparisons with different improvements over the standard GP algorithm). We performed a systematic review of the literature comparing genetic programming models with comparative techniques based on different independent project variables. A total of 23 primary studies were obtained after searching different information sources for the time span 1995-2008. The results of the review show that symbolic regression using genetic programming has been applied in three domains within software engineering predictive modeling: (i) software quality classification (eight primary studies); (ii) software cost/effort/size estimation (seven primary studies); (iii) software fault prediction/software reliability growth modeling (eight primary studies). While there is evidence in support of using genetic programming for software quality classification, software fault prediction and software reliability growth modeling, the results are inconclusive for software cost/effort/size estimation.
