Digitala Vetenskapliga Arkivet

1 - 50 of 132
  • 1.
    Abdulla, Parosh
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik.
    Atig, Mohamed Faouzi
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik.
    Jonsson, Bengt
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik.
    Lång, Magnus
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik.
    Ngo, Tuan-Phong
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik.
    Sagonas, Konstantinos
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Optimal stateless model checking for reads-from equivalence under sequential consistency (2019). In: Proceedings of the ACM on Programming Languages, E-ISSN 2475-1421. Journal article (Peer-reviewed)
    Abstract [en]

    We present a new approach for stateless model checking (SMC) of multithreaded programs under Sequential Consistency (SC) semantics. To combat state-space explosion, SMC is often equipped with a partial-order reduction technique, which defines an equivalence on executions, and only needs to explore one execution in each equivalence class. Recently, it has been observed that the commonly used equivalence of Mazurkiewicz traces can be coarsened but still cover all program crashes and assertion violations. However, for this coarser equivalence, which preserves only the reads-from relation from writes to reads, there is no SMC algorithm which is (i) optimal in the sense that it explores precisely one execution in each reads-from equivalence class, and (ii) efficient in the sense that it spends polynomial effort per class. We present the first SMC algorithm for SC that is both optimal and efficient in practice, meaning that it spends polynomial time per equivalence class on all programs that we have tried. This is achieved by a novel test that checks whether a given reads-from relation can arise in some execution. Our experimental results show that Nidhugg/rfsc, although slower than the fastest SMC tools in programs where tools happen to examine the same number of executions, always scales similarly to or better than them, and outperforms them by an exponential factor in programs where the reads-from equivalence is coarser than the standard one. We also present two non-trivial use cases where the new equivalence is particularly effective, as well as the significant performance advantage that Nidhugg/rfsc offers compared to state-of-the-art SMC and systematic concurrency testing tools.

    Full text (pdf)
    fulltext
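
    A toy illustration of the reads-from equivalence discussed above (not the paper's algorithm; the event encoding is invented for this sketch): it enumerates all interleavings of three conflicting events on one shared variable and groups them by their reads-from relation. All six interleavings are distinct Mazurkiewicz traces, yet they collapse into three reads-from classes, so an SMC algorithm that is optimal for reads-from equivalence would explore only three executions here.

      from itertools import permutations

      # One shared variable x; each thread issues a single event, so every
      # permutation of the events is a valid schedule.
      events = [('w', 'A', 1), ('w', 'B', 2), ('r', 'C')]

      def reads_from(schedule):
          # Map each read to the write it observes (None = initial value).
          last_write, rf = None, {}
          for e in schedule:
              if e[0] == 'w':
                  last_write = e
              else:
                  rf[e] = last_write
          return tuple(sorted(rf.items(), key=str))

      classes = {}
      for sched in permutations(events):
          classes.setdefault(reads_from(sched), []).append(sched)

      # Prints three classes (the read sees the initial value, A's write, or
      # B's write), each covering two of the six interleavings; since all
      # three events pairwise conflict, Mazurkiewicz equivalence has six.
      for rf, scheds in classes.items():
          print(len(scheds), 'schedules with reads-from', rf)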
  • 2. Abelson, Harold
    et al.
    Sussman, Gerald Jay
    Henz, Martin
    Wrigstad, Tobias
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Structure and Interpretation of Computer Programs: JavaScript Edition (2022). Book (Other academic)
  • 3.
    Adwent, Ann-Kristin
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Shared IT Service Center i kommuner (2012). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    To be able to maintain an adequate IT service for various users and needs, more municipalities are looking at the possibility of merging local IT departments. One solution for merging multiple services/functions and creating synergy opportunities is called a Shared Service Center (SSC). The concept of an SSC is that the administrative service itself becomes a core activity within the organization. The aim of this thesis is to provide municipalities that are considering a merger of their local IT departments with recommendations on how to design the Shared IT Service Center. Recommendations are outlined based on an analysis of IT-management literature and reports, and by conducting a study of three ongoing collaborations.

    The conclusions drawn from the study suggest the following guidelines for the design of a Shared IT Service Center for municipalities:

    Create a vision associated with a specific and structured target state.
    Identify the needs of different target groups in the municipalities and set a common standard.
    Create a clear and practical model/SLA for the form of the cooperation and agreement.
    Ensure that the individual municipalities retain their commissioning skills, so that these are not lost in the context of a common IT operation.
    Find an organization that is democratic for the member municipalities and facilitates a long-term partnership.
    Specify operation and maintenance so that they can be regulated and controlled.
    Establish a common help desk.
    Establish a common standard and a consolidated infrastructure before introducing a common technology platform.

    Full text (pdf)
    fulltext
  • 4.
    Ahani, Ghafour
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Optimal Scheduling for Timely Information in Communication Systems (2021). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    The explosive growth of data in the information society poses significant challenges to the timely delivery of information in communication networks. Hence, optimal utilization of scarce network resources is crucial. This dissertation contributes to several aspects of the timely delivery of information, including scheduling of data flows between sources and destinations in a network, scheduling of content caching in a base station of mobile networks, and scheduling of information collection. Two important metrics, namely delivery deadline and information freshness, are accounted for. Mathematical models and tailored solution approaches are developed via tools from optimization.

    Five research papers are included in the dissertation. Paper I studies a flow routing and scheduling problem with delivery deadline. This type of problem arises in many applications such as data exchange in scientific projects or data replication in data centers where large amounts of data need to be timely distributed across the globe. Papers II, III, and IV investigate content caching along time in a base station. Content caching at the network’s edge has recently been considered a cost-efficient way of providing users with their requested information. In Paper II, the schedule for updating the cache is optimized with respect to the content requests of users and the popularity of contents over time. Paper III, as an extension of Paper II, addresses the question of how to keep the cache information fresh, as not all contents can be updated due to the limited capacity of the backhaul link. The freshness of information is quantified via the notion of age of information (AoI). Paper IV investigates joint optimization of content caching as well as recommendation; the latter contributes to satisfying content requests in case of a cache miss. Paper V studies optimal scheduling of information collection from a set of sensor nodes via an unmanned aerial vehicle. The objective is to keep the overall AoI as small as possible.

    In these studies, analysis of problem complexity is provided, and time-efficient solution algorithms based on column generation, Lagrangian decomposition, and graph labeling are developed. The algorithms also yield a bound on the global optimum, which can be used to assess the performance of any given solution. The effectiveness of the algorithms in obtaining near-optimal solutions is demonstrated via extensive simulations.

    Full text (pdf)
    UUThesis_Ahani,G_2021
    Download (jpg)
    presentation image
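
    For orientation, the age of information (AoI) metric that recurs in this thesis and in the papers below has the following standard definition from the AoI literature (added for context, not quoted from the thesis itself):

      \Delta(t) = t - u(t), \qquad \bar{\Delta} = \frac{1}{T} \int_{0}^{T} \Delta(t) \, dt

    where u(t) is the generation (or last-fetch) time of the freshest information held at time t: the age grows linearly until an update resets it, and the scheduling problems studied here trade this freshness against backhaul load.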
  • 5.
    Ahani, Ghafour
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Yuan, Di
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Accounting for Information Freshness in Scheduling of Content Caching (2020). In: ICC 2020: 2020 IEEE International Conference on Communications (ICC), 2020. Conference paper (Peer-reviewed)
    Abstract [en]

    In this paper, we study the problem of optimal scheduling of content placement along time in a base station with limited cache capacity, taking into account jointly the offloading effect and freshness of information. We model offloading based on popularity in terms of the number of requests and information freshness based on the notion of age of information (AoI). The objective is to reduce the load of backhaul links as well as the AoI of contents in the cache via a joint cost function. For the resulting optimization problem, we prove its hardness via a reduction from the Partition problem. Next, via a mathematical reformulation, we derive a solution approach based on column generation and a tailored rounding mechanism. Finally, we provide performance evaluation results showing that our algorithm provides near-optimal solutions.

  • 6.
    Ahani, Ghafour
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Yuan, Di
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Optimal scheduling of content caching subject to deadline (2020). In: IEEE Open Journal of the Communications Society, E-ISSN 2644-125X, Vol. 1, pp. 293-307. Journal article (Peer-reviewed)
    Abstract [en]

    Content caching at the edge of the network is a promising technique to alleviate the burden on backhaul networks. In this paper, we consider content caching along time in a base station with limited cache capacity. As the popularity of contents may vary over time, the contents of the cache need to be updated accordingly. In addition, a requested content may have a delivery deadline within which the content needs to be obtained. Motivated by these observations, we address optimal scheduling of content caching in a time-slotted system under delivery deadline and cache capacity constraints. The objective is to minimize a cost function that captures the load of backhaul links. For our optimization problem, we prove its NP-hardness via a reduction from the Partition problem. For problem solving, via a mathematical reformulation, we develop a solution approach based on repeatedly applying a column generation algorithm and a problem-tailored rounding algorithm. In addition, two greedy algorithms are developed based on existing algorithms from the literature. Finally, we present extensive simulations that verify the effectiveness of our solution approach in obtaining near-optimal solutions in comparison to the greedy algorithms. The solutions obtained from our solution approach are within 1.6% of global optimality.

    Full text (pdf)
    fulltext
  • 7.
    Ahani, Ghafour
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Yuan, Di
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Sun, Sumei
    Institute for Infocomm Research, Singapore, Singapore; Singapore Institute of Technology, Singapore, Singapore.
    Optimal Scheduling of Age-centric Caching: Tractability and Computation (2022). In: IEEE Transactions on Mobile Computing, ISSN 1536-1233, E-ISSN 1558-0660, Vol. 21, pp. 2939-2954. Journal article (Peer-reviewed)
    Abstract [en]

    The notion of age of information (AoI) has become an important performance metric in network and control systems. Information freshness, represented by AoI, naturally arises in the context of caching. We address optimal scheduling of cache updates for a time-slotted system where the contents vary in size. There is limited capacity for making cache updates. Each content is associated with a utility function that depends on the AoI and the time duration of absence from the cache. For this combinatorial optimization problem, we present the following contributions. First, we provide theoretical results on problem tractability. Whereas the problem is NP-hard in general, we prove tractability in polynomial time for a special case with uniform content size, via a reformulation using network flows. Second, we derive an integer linear formulation for the problem, of which the optimal solution can be obtained for small-scale scenarios. Next, via a mathematical reformulation, we derive a scalable optimization algorithm using repeated column generation. In addition, the algorithm computes a bound on the global optimum, which can be used to assess the performance of any scheduling solution. Performance evaluation of large-scale scenarios demonstrates the strengths of the algorithm in comparison to a greedy schedule. Finally, we extend the applicability of our work to cyclic scheduling.
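
    A minimal greedy sketch of the popularity-versus-freshness trade-off that these three caching papers formalise (the papers use column generation and integer programming; this greedy baseline and all names in it are illustrative only):

      # Each slot the base station may refresh at most `capacity` cached
      # contents over the backhaul; refreshing resets a content's AoI to 1.
      def greedy_age_aware_update(popularity, ages, capacity):
          # Rank contents by popularity weighted by current age: a rough
          # proxy for requests that would otherwise be served stale data.
          ranked = sorted(range(len(ages)),
                          key=lambda i: popularity[i] * ages[i],
                          reverse=True)
          return set(ranked[:capacity])

      ages = [3, 1, 7, 2]          # slots since each content was last fetched
      popularity = [10, 40, 5, 8]  # expected requests in the coming slot
      refresh = greedy_age_aware_update(popularity, ages, capacity=2)
      print(refresh)               # {1, 2}: highest popularity-times-age
      ages = [1 if i in refresh else a + 1 for i, a in enumerate(ages)]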

  • 8.
    Andersson, Andreas
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Utilizing Neural Networks To Adaptively Demodulate And Decode Signals In An Impulsive Environment (2023). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Electromagnetic disturbance can be detrimental to the performance of a radio communication system, and in today’s society, where more and more electronic devices are present in our everyday life, it is increasingly vital to consider man-made interference. A communication system can take the noise characteristics into consideration and will thereby excel in such environments; however, it follows that the algorithms utilized in such systems are more computationally complex and therefore slow. This master thesis aims to explore the possibility of a neural network-based solution that reaches the same accuracy as existing methods, but more quickly. Numerous existing model alternatives have been explored and a plethora of different improvement techniques have been outlined. Two models, Hannet and Lannet, have been designed and improved to enable adaptive demodulation, both including and excluding decoding at the receiver, in an end-to-end communication system. The evaluation results demonstrate that the proposed models are comparable to, and in some cases even more accurate than, current standardized methods. However, the models are unable to fully learn the decoding algorithms present in the experiments. Thus, even though demodulation by itself thrives, performing decoding in conjunction with demodulation is out of reach for these models.

    Full text (pdf)
    fulltext
  • 9.
    Andrejev, Andrej
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Semantic Web Queries over Scientific Data (2016). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Semantic Web and Linked Open Data provide a potential platform for interoperability of scientific data, offering a flexible model for providing machine-readable and queryable metadata. However, RDF and SPARQL have gained limited adoption within the scientific community, mainly due to the lack of support for managing massive numeric data, along with certain other important features – such as extensibility with user-defined functions, query modularity, and integration with existing environments and workflows.

    We present the design, implementation and evaluation of Scientific SPARQL – a language for querying data and metadata combined, represented using the RDF graph model extended with numeric multidimensional arrays as node values – RDF with Arrays. The techniques used to store RDF with Arrays in a scalable way and process Scientific SPARQL queries and updates are implemented in our prototype software – Scientific SPARQL Database Manager, SSDM, and its integrations with data storage systems and computational frameworks. This includes scalable storage solutions for numeric multidimensional arrays and an efficient implementation of array operations. The arrays can be physically stored in a variety of external storage systems, including files, relational databases, and specialized array data stores, using our Array Storage Extensibility Interface. Whenever possible SSDM accumulates array operations and accesses array contents in a lazy fashion.

    In scientific applications numeric computations are often used for filtering or post-processing the retrieved data, which can be expressed in a functional way. Scientific SPARQL allows expressing common query sub-tasks with functions defined as parameterized queries. This becomes especially useful along with functional language abstractions such as lexical closures and second-order functions, e.g. array mappers.

    Existing computational libraries can be interfaced and invoked from Scientific SPARQL queries as foreign functions. Cost estimates and alternative evaluation directions may be specified, aiding the construction of better execution plans. Costly array processing, e.g. filtering and aggregation, is thus performed on the server, reducing the amount of communication. Furthermore, common supported operations are delegated to the array storage back-ends, according to their capabilities. Both expressivity and performance of Scientific SPARQL are evaluated on a real-world example, and further performance tests are run using our mini-benchmark for array queries.

    Full text (pdf)
    fulltext
    Download (jpg)
    presentation image
  • 10.
    Aronis, Stavros
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Effective Techniques for Stateless Model Checking (2018). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    Stateless model checking is a technique for testing and verifying concurrent programs, based on exploring the different ways in which operations executed by the processes of a concurrent program can be scheduled. The goal of the technique is to expose all behaviours that can be a result of scheduling non-determinism. As the number of possible schedulings is huge, however, techniques that reduce the number of schedulings that must be explored to achieve verification have been developed. Dynamic partial order reduction (DPOR) is a prominent such technique.

    This dissertation presents a number of improvements to dynamic partial order reduction that significantly increase the effectiveness of stateless model checking. Central among these improvements are the Source and Optimal DPOR algorithms (and the theoretical framework behind them) and a technique that allows the observability of the interference of operations to be used in dynamic partial order reduction. Each of these techniques can exponentially decrease the number of schedulings that need to be explored to verify a concurrent program. The dissertation also presents a simple bounding technique that is compatible with DPOR algorithms and effective for finding bugs in concurrent programs, if the number of schedulings is too big to make full verification possible in a reasonable amount of time, even when the improved algorithms are used.

    All improvements have been implemented in Concuerror, a tool for applying stateless model checking to Erlang programs. In order to increase the effectiveness of the tool, the interference of the high-level operations of the Erlang/OTP implementation is examined, classified and precisely characterized. Aspects of the implementation of the tool are also described. Finally, a use case is presented, showing how Concuerror was used to find bugs and verify key correctness properties in repair techniques for the CORFU chain replication protocol.

    Full text (pdf)
    fulltext
    Download (jpg)
    presentation image
  • 11.
    Arvidsson, Ellen
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Castegren, Elias
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Clebsch, Sylvan
    Drossopoulou, Sophia
    Noble, James
    Parkinson, Matthew J.
    Wrigstad, Tobias
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Reference Capabilities for Flexible Memory Management (2023). In: Proceedings of the ACM on Programming Languages, E-ISSN 2475-1421, Vol. 7, no. OOPSLA2, pp. 1363-1393, article id 270. Journal article (Peer-reviewed)
    Abstract [en]

    Verona is a concurrent object-oriented programming language that organises all the objects in a program into a forest of isolated regions. Memory is managed locally for each region, so programmers can control a program's memory use by adjusting how objects are partitioned into regions, and by setting each region's memory management strategy. A thread can only mutate (allocate, deallocate) objects within one active region, its "window of mutability". Memory management costs are localised to the active region, ensuring overheads can be predicted and controlled. Moving the mutability window between regions is explicit, so code can be executed wherever it is required, yet programs remain in control of memory use. An ownership type system based on reference capabilities enforces region isolation, controlling aliasing within and between regions, yet supporting objects moving between regions and threads. Data accesses never need expensive atomic operations, and are always thread-safe.

    Full text (pdf)
    fulltext
  • 12.
    Astrid, Berghult
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    A practical comparison between algebraic and statistical attacks on the lightweight cipher SIMON (2016). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In the summer of 2013 the NSA released a new family of lightweight block ciphers called SIMON. However, they did not publish any assessment of SIMON's security. Since then, only a few papers on this topic have been released, and none of them has included an algebraic analysis. Moreover, only one paper described a practical implementation of the attack. This master thesis aims to implement a practical attack, both algebraic and differential, on SIMON. In doing so, we are able to make a comparison between the two attack methods. The algebraic attack was executed with the SAT solver CryptoMiniSat2 and could break 7 rounds. The differential attack was implemented in three steps. First we created a difference distribution table (DDT), then we identified a differential by a search algorithm over the DDT. In the last step we designed a key recovery attack to recover the last round key. The attack could break 13 rounds given a 9-round differential. With a simple expansion of the key recovery attack, it has the potential to break even more rounds for the same 9-round differential. This indicates that algebraic cryptanalysis might not be such a strong tool, since it could only break 7 rounds. Furthermore, if a generic algebraic attack does not work on SIMON, it has little or no chance of being successful on a more complex cipher. In other words, this algebraic attack may serve as a benchmark for the efficiency of generic algebraic attacks.

    Full text (pdf)
    fulltext
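
    The first step of the differential attack described above is building a difference distribution table (DDT). A self-contained sketch for a toy 4-bit S-box follows (illustrative only: SIMON has no S-box, so the thesis instead tabulates differentials of its AND-rotation-XOR round function; PRESENT's S-box is used here purely as a stand-in):

      # Count, for every input difference dx, how often each output
      # difference dy occurs: table[dx][dy] = #{x : S(x) ^ S(x ^ dx) = dy}.
      def ddt(sbox, nbits=4):
          size = 1 << nbits
          table = [[0] * size for _ in range(size)]
          for x in range(size):
              for dx in range(size):
                  table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
          return table

      PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                      0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
      table = ddt(PRESENT_SBOX)
      # A high entry table[dx][dy] is a one-round differential holding with
      # probability table[dx][dy]/16; chaining such transitions over many
      # rounds yields the multi-round differential used in key recovery.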
  • 13.
    Axel, Lindegren
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Partitioning temporal networks: A study of finding the optimal partition of temporal networks using community detection (2018). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Many of the algorithms used for community detection in temporal networks have been adapted from static network theory. A common approach to dealing with the temporal dimension is to create multiple static networks from one temporal network, based on a time condition. In this thesis, the focus lies on identifying the optimal partitioning of a few temporal networks. This is done by utilizing the popular community detection algorithm called Generalized Louvain. The output of Generalized Louvain comes in two parts: first, the created community structure, i.e. how the network is connected; secondly, a measure called modularity, which is a scalar value representing the quality of the identified community structure. The methodology used is aimed at creating comparable results by normalizing modularity. The normalization process can be explained in two major steps: 1) study the effects on modularity when partitioning a temporal network into an increasing number of slices; 2) study the effects on modularity when varying the number of connections (edges) in each time slice. The results show that the created methodology yields comparable results on two of the four temporal networks tested here, implying that it might be more suited for some networks than others. This can serve as an indication that there does not exist a general model for community detection in temporal networks. Instead, the type of network is key to choosing the method.

    Full text (pdf)
    fulltext
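
    Since both outputs of Generalized Louvain revolve around modularity, the standard single-slice definition is worth recalling (textbook formula, added for context; the multislice variant used for temporal networks adds inter-slice coupling terms):

      Q = \frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)

    where A is the adjacency matrix, k_i the degree of node i, m the total number of edges, and \delta(c_i, c_j) = 1 exactly when nodes i and j are assigned to the same community. Because Q is not directly comparable across networks or slicings of different density, the thesis normalizes it before comparing partitionings.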
  • 14.
    Aziz, Yama
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Exploring a keyword driven testing framework: a case study at Scania IT (2017). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The purpose of this thesis is to investigate organizational quality assurance through the international testing standard ISO 29119. The focus will be on how an organization carries out testing processes and designs and implements test cases. Keyword driven testing is a test composition concept in ISO 29119 and suitable for automation. This thesis will answer how keyword driven testing can facilitate the development of maintainable test cases and support test automation in an agile organization.

    The methodology used was a qualitative case study including semi-structured interviews and focus groups with agile business units within Scania IT. Among the interview participants were developers, test engineers, scrum masters and a unit manager.

    The results describe testing practices carried out in several agile business units, maintainability issues with test automation, and general ideas of how test automation should be approached. Common issues with test automation were test cases failing due to changed test inputs, inexperience with test automation frameworks, and lack of resources due to the project release cycle.

    This thesis concludes that keyword driven testing has the potential to solve several maintainability issues related to test cases breaking. However, the practicality and effectiveness of said potential remain unanswered. Moreover, successfully developing an automated keyword driven testing framework requires integration with existing test automation tools and consideration of the agile organizational circumstances.

    Full text (pdf)
    Exploring a keyword driven testing framework
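
    A minimal sketch of the keyword-driven idea from ISO 29119 examined in this thesis: test cases are plain data (keyword sequences), and a small runner dispatches each keyword to an implementation function, decoupling test maintenance from automation code. All names here are invented for illustration, and prints stand in for real browser-driving actions:

      KEYWORDS = {}

      def keyword(name):
          # Register an implementation function under a keyword name.
          def register(fn):
              KEYWORDS[name] = fn
              return fn
          return register

      @keyword("open_page")
      def open_page(url):
          print("opening", url)          # a real runner would drive a browser

      @keyword("check_title")
      def check_title(expected):
          print("checking title ==", expected)

      # The test case itself is data, maintainable without touching code:
      test_case = [("open_page", "https://example.com"),
                   ("check_title", "Example Domain")]
      for kw, arg in test_case:
          KEYWORDS[kw](arg)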
  • 15.
    Backlund, Ludvig
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    A technical overview of distributed ledger technologies in the Nordic capital market (2016). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis examines how Distributed Ledger Technologies (DLTs) could be utilized in capital markets in general and in the Nordic capital market in particular. DLTs were introduced with the so-called cryptocurrency Bitcoin in 2009 and have in the last few years been of interest to various financial institutions as a means to streamline financial processes. By combining computer scientific concepts such as public-key cryptography and consensus algorithms, DLTs make it possible to keep shared databases with limited trust among the participants and without the use of a trusted third party. In this thesis, various actors on the Nordic capital market were interviewed and their stances on DLTs were summarized. In addition to this, a Proof of Concept of a permissioned DLT application for ownership registration of securities was constructed. It was found that all the interviewees were generally optimistic about DLTs' potential to increase the efficiency of capital markets. The technology needs to be adapted to handle the capital market's demands for privacy and large transaction volumes, but there is general agreement among the interviewees that these issues will be solved. The biggest challenge for an adoption of DLTs seems to lie in finding a common industry-wide standard.

    Full text (pdf)
    fulltext
  • 16.
    Badiozamany, Sobhan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Real-time data stream clustering over sliding windows (2016). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    In many applications, e.g. urban traffic monitoring, stock trading, and industrial sensor data monitoring, clustering algorithms are applied on data streams in real-time to find current patterns. Here, sliding windows are commonly used as they capture concept drift.

    Real-time clustering over sliding windows means early detection of continuously evolving clusters as soon as they occur in the stream, which requires efficient maintenance of cluster memberships that change as windows slide.

    Data stream management systems (DSMSs) provide high-level query languages for searching and analyzing streaming data. In this thesis we extend a DSMS with a real-time data stream clustering framework called the Generic 2-phase Continuous Summarization framework (G2CS). G2CS modularizes data stream clustering by taking as input clustering algorithms which are expressed in terms of a number of functions and indexing structures. G2CS supports real-time clustering through an efficient window sliding mechanism and algorithm-transparent indexing. A particular challenge for real-time detection of a high number of rapidly evolving clusters is the efficiency of window slides for clustering algorithms where deletion of expired data is not supported, e.g. BIRCH. To that end, G2CS includes a novel window maintenance mechanism called Sliding Binary Merge (SBM). To further improve real-time sliding performance, G2CS uses generation-based multi-dimensional indexing where indexing structures suitable for the clustering algorithms can be plugged in.

    Full text (pdf)
    fulltext
    Download (jpg)
    preview image
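
    A generic sketch of the two-phase summarization pattern that G2CS builds on (this is not G2CS's API: Counter stands in for a mergeable cluster summary such as a BIRCH cluster-feature tree, and the pane-merge loop is a simplified stand-in for mechanisms like Sliding Binary Merge):

      from collections import Counter, deque

      class SlidingWindowSummary:
          # Phase 1 summarises each arriving pane; phase 2 merges the pane
          # summaries currently inside the window. Merging sidesteps
          # deletion, which algorithms like BIRCH do not support: expired
          # panes simply fall off the bounded deque.
          def __init__(self, panes_per_window):
              self.panes = deque(maxlen=panes_per_window)

          def add_pane(self, items):
              self.panes.append(Counter(items))      # phase 1

          def window_summary(self):
              total = Counter()
              for pane in self.panes:                # phase 2
                  total.update(pane)
              return total

      w = SlidingWindowSummary(panes_per_window=3)
      for pane in (["a", "a", "b"], ["b"], ["c", "a"], ["c"]):
          w.add_pane(pane)
      print(w.window_summary())   # counts over the three most recent panes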
  • 17.
    Bergquist, Jonatan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Blockchain Technology and Smart Contracts: Privacy-preserving Tools (2017). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The purpose of this Master's thesis is to explore blockchain technology and smart contracts as a way of building privacy-sensitive applications. The main focus is on a medication plan containing prescriptions, built on a blockchain system of smart contracts. This is an example use case, but the results can be transferred to other cases where sensitive data is being shared and a proof of validity or authentication is needed. First, the problem is presented: why medication plans are in need of digitalisation and why blockchain technology is a fitting technology for implementing such an application. Then, blockchain technology is explained, since it is a very new and relatively unfamiliar IT construct. Thereafter, a design for solving the problem is proposed. A system of smart contracts was built to prove how such an application can be built, together with suggested guidelines for how a blockchain system should be designed to fulfil the requirements that were defined. Finally, a discussion is held regarding the applicability of different blockchain designs to privacy-handling applications.

    Full text (pdf)
    fulltext
  • 18.
    Bernström, Kristian
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Näsman, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Utredning och implementering av en prototyp för integration av Prevas FOCS och ABB 800xA (2014). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    ABB and Prevas have initiated a collaboration to sell a system to optimize steel industry furnaces, called FOCS. The purpose of this thesis is to investigate possibilities for integrating Prevas FOCS and ABB 800xA.

    The result of the investigation is used for an implementation of a prototype of the integrated system. The study shows a general method that can be used when integrating two software systems. The prototype of the integrated system is made with usability principles in mind. This is a very important aspect in order to create a good working environment for the operators of a steel plant. It is also important to follow communication standards when integrating software systems. In industrial networks used in the steel industry, OPC is a standard for communication. We recommend ABB and Prevas to follow this standard when possible to make the integration smoother. To keep the cost of the integration to a minimum, it is also recommended to reuse existing resources. This can however have a negative effect on usability, and it is therefore important to keep a balance between cost and usability.

    The prototype made in this thesis accomplishes the goal of transferring the functionalities used by operators of Prevas FOCS to 800xA so that operators can control the processes using only one integrated system.

    Full text (pdf)
    fulltext
  • 19.
    Björdal, Gustav
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    From Declarative Models to Local Search (2021). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    A solver is a general-purpose software for solving optimisation problems. It takes as input a description of a problem, called a model, and uses a collection of algorithms, called its solving technology, to ideally produce an optimal solution as output. Most solvers have a modelling language that cannot be processed by other solvers. This means that there is a risk of making an early commitment to a solver and its technology when writing a model. To address this risk, and to increase the accessibility of solvers, there has been a push for technology-independent modelling languages, a notable one being MiniZinc.

    A model written in MiniZinc is transformed by the MiniZinc toolchain in order to suit a targeted solver and its technology. However, for a solver to process a MiniZinc model, it also requires what is called a backend for MiniZinc. A backend translates the transformed MiniZinc model into the solver’s own modelling language and synthesises any components not in a MiniZinc model that the solver (or its technology) requires.

    The solving technology called constraint-based local search (CBLS) is based on the popular algorithm design methodology called local search, which often quickly produces near-optimal solutions, even to large problems. So, with the advent of CBLS solvers, there is a need for CBLS backends to modelling languages like MiniZinc.

    This thesis contributes to three research topics. First, it shows for the first time how to create a CBLS backend for a technology-independent modelling language, namely MiniZinc, and it shows that CBLS via MiniZinc can be competitive for solving optimisation problems. Second, it extends MiniZinc with concepts from local search, and shows that these concepts can be used even by other technologies towards designing new types of solvers. Third, it extends the utilisation of another technology, namely constraint programming, inside local-search solvers and backends.

    These contributions make local search accessible to all users of modelling languages like MiniZinc, and allow some optimisation problems to be solved more efficiently via such languages.

    Full text (pdf)
    UUThesis-G_Björdal_2021
    Download (jpg)
    presentation image
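
    For readers unfamiliar with the solving technology targeted here, the following is a bare-bones constraint-based local search loop in the min-conflicts style, solving n-queens (a generic sketch of the paradigm, not OscaR/CBLS or the thesis's backend):

      import random

      def min_conflicts_nqueens(n, max_steps=100_000):
          cols = [random.randrange(n) for _ in range(n)]  # queen column per row

          def conflicts(r, c):
              # Constraint violations if row r's queen sits in column c.
              return sum(1 for r2 in range(n) if r2 != r and
                         (cols[r2] == c or abs(cols[r2] - c) == abs(r2 - r)))

          for _ in range(max_steps):
              violated = [r for r in range(n) if conflicts(r, cols[r]) > 0]
              if not violated:
                  return cols                  # all constraints satisfied
              r = random.choice(violated)      # pick a violated variable ...
              cols[r] = min(range(n),          # ... and make the best move
                            key=lambda c: conflicts(r, c))
          return None

      print(min_conflicts_nqueens(8))

    A backend such as the one the thesis contributes must synthesise these handcrafted ingredients, the neighbourhood (here: change one queen's column) and the heuristic, automatically from a declarative MiniZinc model.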
  • 20.
    Björdal, Gustav
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    String Variables for Constraint-Based Local Search (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    String variables occur as a natural part of many computationally challenging problems. Usually, such problems are solved using problem-specific algorithms implemented from first principles, which can be a time-consuming and error-prone task. A constraint solver is a framework that can be used to solve computationally challenging problems by first declaratively defining the problem and then solving it using specialised off-the-shelf algorithms, which can cut down development time significantly and result in faster solution times and higher solution quality. There are many constraint solving technologies, one of which is constraint-based local search (CBLS). However, very few constraint solvers have native support for solving problems with string variables. The goal of this thesis is to add string variables as a native type to the CBLS solver OscaR/CBLS. The implementation was experimentally evaluated on the Closest String Problem and the Word Equation System problem. The evaluation shows that string variables for CBLS can be a viable option for solving string problems. However, further work is required to obtain even more competitive performance.

    Full text (pdf)
    fulltext
  • 21.
    Björdal, Gustav
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Flener, Pierre
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Pearson, Justin
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Generating compound moves in local search by hybridisation with complete search (2019). In: Integration of Constraint Programming, Artificial Intelligence, and Operations Research / [ed] L.-M. Rousseau and K. Stergiou, Springer, 2019, Vol. 11494, pp. 95-111. Conference paper (Peer-reviewed)
    Abstract [en]

    A black-box local-search backend to a solving-technology-independent modelling language, such as MiniZinc, automatically infers from the structure of a declarative model for a satisfaction or optimisation problem a combination of a neighbourhood, heuristic, and meta-heuristic. These ingredients are then provided to a local-search solver, but are manually designed in a handcrafted local-search algorithm. However, such a backend can perform poorly due to model structure that is inappropriate for local search, for example when it considers moves modifying only variables that represent auxiliary information. Towards overcoming such inefficiency, we propose compound-move generation, an extension to local-search solvers that uses a complete-search solver in order to augment moves modifying non-auxiliary variables so that they also modify auxiliary ones. Since compound-move generation is intended to be applied to such models, we discuss how to identify them.

    We present several refinements of compound-move generation and show its very positive impact on several third-party models. This helps reduce the unavoidable gap between black-box local search and local-search algorithms crafted by experts.

  • 22.
    Björdal, Gustav
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Flener, Pierre
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Pearson, Justin
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Stuckey, Peter J.
    Faculty of Information Technology, Monash University, Melbourne, Australia.
    Exploring declarative local-search neighbourhoods with constraint programming (2019). In: Principles and Practice of Constraint Programming / [ed] Thomas Schiex and Simon de Givry, Switzerland: Springer, 2019, Vol. 11802, pp. 37-53. Conference paper (Peer-reviewed)
    Abstract [en]

    Using constraint programming (CP) to explore a local-search neighbourhood was first tried in the mid 1990s. The advantage is that constraint propagation can quickly rule out uninteresting neighbours, sometimes greatly reducing the number actually probed. However, a CP model of the neighbourhood has to be handcrafted from the model of the problem: this can be difficult and tedious. That research direction appears abandoned since large-neighbourhood search (LNS) and constraint-based local search (CBLS) arose as alternatives that seem easier to use. Recently, the notion of declarative neighbourhood was added to the technology-independent modelling language MiniZinc, for use by any backend to MiniZinc, but currently only used by a CBLS backend. We demonstrate that declarative neighbourhoods are indeed technology-independent by using the old idea of CP-based neighbourhood exploration: we explain how to encode automatically a declarative neighbourhood into a CP model of the neighbourhood. This enables us to lift any CP solver into a local-search backend to MiniZinc. Our prototype is competitive with CP, CBLS, and LNS backends to MiniZinc.

  • 23.
    Björklund, Henrik
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Combinatorial Optimization for Infinite Games on Graphs (2005). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    Games on graphs have become an indispensable tool in modern computer science. They provide powerful and expressive models for numerous phenomena and are extensively used in computer-aided verification, automata theory, logic, complexity theory, computational biology, etc.

    The infinite games on finite graphs we study in this thesis have their primary applications in verification, but are also of fundamental importance from the complexity-theoretic point of view. They include parity, mean payoff, and simple stochastic games.

    We focus on solving graph games by using iterative strategy improvement and methods from linear programming and combinatorial optimization. To this end we consider old strategy evaluation functions, construct new ones, and show how all of them, due to their structural similarities, fit into a unifying combinatorial framework. This allows us to employ randomized optimization methods from combinatorial linear programming to solve the games in expected subexponential time.

    We introduce and study the concept of a controlled optimization problem, capturing the essential features of many graph games, and provide sufficient conditions for solvability of such problems in expected subexponential time.

    The discrete strategy evaluation function for mean payoff games, which we derive from the new controlled longest-shortest path problem, leads to improvement algorithms that are considerably more efficient than the previously known ones, and also improves the efficiency of algorithms for parity games.

    We also define the controlled linear programming problem, and show how the games are translated into this setting. Subclasses of the problem, more general than the games considered, are shown to belong to the intersection of NP and coNP, or even to be solvable by subexponential algorithms.

    Finally, we take the first steps in investigating the fixed-parameter complexity of parity, Rabin, Streett, and Muller games.

    Full text (pdf)
    FULLTEXT01
    Download (pdf)
    COVER01
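
    Among the game classes above, the mean payoff objective is the easiest to state; for a play traversing edges e_0 e_1 e_2 ... with weight function w, the value that one player maximises and the other minimises is standardly defined as (a textbook definition, added for orientation only):

      \nu = \liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} w(e_i)

    Strategy improvement evaluates a strategy by such values and switches to locally better edges; the structural similarity of these evaluation functions across parity, mean payoff, and simple stochastic games is what lets the thesis's combinatorial framework cover all three.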
  • 24.
    Brandauer, Stephan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Structured Data (2018). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    References are a programming language construct that lets a programmer access a datum regardless of its location.

    References permit aliasing: several references to the same object, effectively making a single object accessible through different names (or paths). Aliasing, especially of mutable data, is both a blessing and a curse: when used correctly, it can make a programmer's life easier; when used incorrectly, for example through accidental aliases that the programmer is unaware of, aliasing can lead to hard-to-find bugs and hard-to-verify programs.

    Aliases allow us to build efficient data structures by connecting objects together, making them immediately reachable. Aliases are at the heart of many useful programming idioms. But with great power comes great responsibility: unless a programmer carefully manages aliases in a program, aliases propagate changes and make parts of a program's memory change seemingly for no reason. Additionally, such bugs are very easy to make but very hard to track down.

    This thesis presents an overview of techniques for controlling how, when and if data can be aliased, as well as how and if data can be mutated. Additionally, it presents three different projects aimed at conserving the blessings but reducing the curses. The first project is disjointness domains, a type system for expressing intended aliasing in a fine-grained manner so that aliasing will not be unexpected; the second project is Spencer, a tool to flexibly and precisely analyse the use of aliasing in programs, to improve our understanding of how aliasing of mutable data is used in practice; and the third project is c flat, an approach for implementing high-level collection data structures using a richer reference construct that reduces aliasing problems but still retains many of aliasing's benefits.

    Full text (pdf)
    fulltext
    Download (jpg)
    presentation image
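
    The hazard described above is easy to reproduce in a few lines; a minimal stand-alone illustration (unrelated to the thesis's own systems) of mutable state changing "seemingly for no reason" through an accidental alias:

      inventory = {"bolts": 100}
      snapshot = inventory            # an alias, not a copy
      inventory["bolts"] -= 40
      print(snapshot["bolts"])        # 60: the "snapshot" changed too
      snapshot = dict(inventory)      # a real copy breaks the alias
      inventory["bolts"] -= 40
      print(snapshot["bolts"])        # still 60: mutation no longer propagates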
  • 25.
    Bylund, Markus
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Personal service environments: Openness and user control in user-service interaction (2001). Licentiate thesis, comprising papers (Other academic)
    Abstract [en]

    This thesis describes my work on making the whole experience of using electronic services more pleasant and practical. More and more people use electronic services in their daily life, be it services for communicating with colleagues or family members, web-based bookstores, or network-based games for entertainment. However, electronic services in general are more difficult to use than they would have to be. They are limited in how and when users can access them. Services do not collaborate despite obvious advantages to their users, and they put the integrity and privacy of their users at risk.

    In this thesis, I argue that there are structural reasons for these problems, rather than problems with content or the technology per se. The focus when designing electronic services tends to be on the service providers or on the artifacts that are used for accessing the services. I present an approach that focuses on the user instead, based on the concept of personal service environments. These provide a mobile locale for storing and running electronic services of individual users. This gives the user increased control over which services to use, from where they can be accessed, and what personal information services gather. The concept allows, and encourages, service collaboration, but not without letting the user maintain control over the process. Finally, personal service environments allow continuous usage of services while switching between interaction devices and moving between places.

    The sView system, which is also described, implements personal service environments and serves as an example of how the concept can be realized. The system consists of two parts. The first is a specification of how both services for sView and infrastructure for handling services should be developed. The second is a reference implementation of the specification, which includes sample services that add to and demonstrate the functionality of sView.

    Full text (pdf)
    fulltext
  • 26.
    Byström, Adam
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    From Intent to Code: Using Natural Language Processing (2017). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Programming, and the ability to express one's intent to a machine, is becoming a very important skill in our digitalizing society. Today, instructing a machine such as a computer to perform actions is done through programming. What if this could be done with human language? This thesis examines how new technologies and methods in the form of Natural Language Processing can be used to make programming more accessible by translating intent expressed in natural language into code that a computer can execute. Related research has studied using natural language as a programming language and using natural language to instruct robots. These studies have shown promising results but are hindered by strict syntaxes, limited domains and an inability to handle ambiguity. Studies have also used Natural Language Processing to analyse source code, turning code into natural language. This thesis takes the reverse approach. By utilizing Natural Language Processing techniques, an intent can be translated into code containing concepts such as sequential execution, loops and conditional statements. In this study, a system for converting intent, expressed in English sentences, into code is developed. To analyse this approach to programming, an evaluation framework is developed, evaluating the system during the development process as well as the usage of the final system. The results show that this way of programming might have potential but conclude that the accuracy of the Natural Language Processing models is still too low. Further research is required to increase this accuracy and further assess the potential of this way of programming.
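
    To make the idea concrete, here is a deliberately naive, hypothetical sketch (not the thesis system, which used statistical NLP models) of translating an English intent into executable code with pattern matching:

    import re

    # Hypothetical toy mapping from English intents to Python code,
    # covering sequential execution and loops.
    def intent_to_code(sentence):
        m = re.match(r"print (.+) (\d+) times", sentence, re.IGNORECASE)
        if m:
            return f"for _ in range({int(m.group(2))}):\n    print({m.group(1)!r})"
        m = re.match(r"add (\d+) and (\d+)", sentence, re.IGNORECASE)
        if m:
            return f"print({m.group(1)} + {m.group(2)})"
        raise ValueError("intent not understood")

    exec(intent_to_code("print hello 3 times"))   # prints 'hello' three times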

    Full text (pdf)
    fulltext
  • 27.
    Carlsson, Elin
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Mattsson, Moa
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    The MaRiQ model: A quantitative approach to risk management (2019). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In recent years, cyber attacks and data fraud have become major issues for companies, businesses and nation states alike. The need for more accurate and reliable risk management models is therefore substantial.

    Today, cybersecurity risk management is often carried out on a qualitative basis, where risks are assigned to a predefined set of categories such as low, medium or high. This thesis aims to challenge that practice by presenting a model that assesses risks quantitatively, and which is therefore named MaRiQ (Manage Risks Quantitatively).

    MaRiQ was developed based on collected requirements and contemporary literature on quantitative risk management. The model consists of a clearly defined flowchart and a supporting tool created in Excel. To generate scientifically validated results, MaRiQ makes use of a number of statistical techniques and mathematical functions, such as Monte Carlo simulations and probability distributions.

    To evaluate whether our model really was an improvement over current qualitative processes, we conducted a workshop at the end of the project. The organization that tested MaRiQ found the model useful and felt that it fulfilled most of their needs.

    Our results indicate that risk management within cybersecurity can and should be performed using more quantitative approaches than is praxis today. Even though several potential developments remain to be made, MaRiQ demonstrates the possible advantages of transitioning from qualitative to quantitative risk management processes.
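
    As a minimal sketch of the kind of quantitative technique MaRiQ builds on (the real tool is Excel-based; the risks and numbers below are invented), a Monte Carlo simulation can aggregate per-risk probabilities and impact ranges into a loss distribution:

    import random

    # Aggregate invented risks into an annual loss distribution.
    risks = [  # (annual probability, (low, high) impact in kSEK)
        (0.30, (50, 400)),
        (0.05, (500, 5000)),
        (0.50, (10, 100)),
    ]

    def simulate_year():
        total = 0.0
        for p, (low, high) in risks:
            if random.random() < p:                 # does the risk occur?
                total += random.uniform(low, high)  # sample its impact
        return total

    losses = sorted(simulate_year() for _ in range(100_000))
    mean = sum(losses) / len(losses)
    p95 = losses[int(0.95 * len(losses))]
    print(f"expected annual loss ~{mean:.0f} kSEK, 95th percentile ~{p95:.0f} kSEK")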

    Full text (pdf)
    fulltext
  • 28.
    Carlsson, Per
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Algorithms for Electronic Power Markets (2004). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    In this thesis we focus on resource allocation problems and electronic markets in particular. Our main application area is electricity markets. We present a number of algorithms and include practical experience.

    There is an ongoing restructuring of power markets in Europe and elsewhere; this means that an industry previously viewed as a natural monopoly is becoming exposed to competition. In the thesis we move a step further, suggesting that end users should take an active part in the trade on power markets such as (i) day-ahead markets and (ii) markets handling close to real-time balancing of power grids. Our ideas and results can be utilised (a) to increase the efficiency of these markets and (b) to handle strained situations when power systems operate at their limits. For this we utilise information and communication technology available today and develop electronic market mechanisms designed for large numbers of participants, typically distributed over a power grid.

    The papers of the thesis cover resource allocation with separable objective functions, a market mechanism that accepts actors with discontinuous demand, and mechanisms that allow actors to express combinatorial dependencies between traded commodities on multi-commodity markets. Further, we present results from field tests and simulations.

    Full text (pdf)
    FULLTEXT01
  • 29.
    Carlsson, Per
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Market and resource allocation algorithms with application to energy control (2001). Licentiate thesis, monograph (Other academic)
    Abstract [en]

    The energy markets of today have rather few active participants. The participants are, with few exceptions, large producers and distributors. The market mechanisms in use are constructed with this kind of market situation in mind. With an automatic or semi-automatic approach, the market mechanism would be able to incorporate a larger number of participants. Smaller producers, and even consumers, could take an active part in the market. The gain is more efficient markets and, due to smaller fluctuations in demand, better resource usage from an environmental perspective.

    The energy markets of the Nordic countries (as well as many others) were deregulated during the last few years. The change has been radical and the situation is still rather new. We believe that the market can be made more efficient with the help of the dynamics of the small actors.

    The idealised world of economic theory often relies on assumptions such as continuous demand and supply curves. These assumptions are useful, and they do not introduce problems in the power market situation of today, with relatively few, large participants. When consumers and small producers are introduced on the market, the situation is different. Then it is a drawback if the market mechanisms cannot handle discontinuous supply and demand.

    The growth in access to computational power and data communications that we have experienced in recent years (and are still experiencing) can be utilised when constructing mechanisms for the energy markets of tomorrow.

    In this thesis we suggest a new market mechanism, ConFAst, that utilises this technological progress to make it possible to incorporate a large number of active participants in the market. The mechanism does not rely on the assumptions above. The gain is a more efficient market with smaller fluctuations in demand over the day.

    To make this possible there is a need for efficient algorithms; in particular, the mechanism relies on an efficient aggregation algorithm. An algorithm for aggregating objective functions is part of this thesis. The algorithm handles maximisation with nonconcave, even noisy, objective functions. Experimental results show that the approach, in practically relevant cases, is significantly faster than the standard algorithm.
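
    A minimal sketch of the standard brute-force aggregation that the thesis algorithm improves upon (values invented): two discretised objective functions are combined by max-plus convolution, so each entry of the aggregate gives the best total value for a given total allocation.

    # agg[k] is the best total value when k units are split between two
    # participants; nonconcave functions are handled, since every split
    # is examined.
    def aggregate(f, g):
        n, m = len(f), len(g)
        agg = [float("-inf")] * (n + m - 1)
        split = [0] * (n + m - 1)       # optimal number of units to f
        for i in range(n):
            for j in range(m):
                if f[i] + g[j] > agg[i + j]:
                    agg[i + j], split[i + j] = f[i] + g[j], i
        return agg, split

    f = [0, 3, 4, 4, 9]   # value of giving 0..4 units to participant 1
    g = [0, 1, 7, 8]      # value of giving 0..3 units to participant 2
    agg, split = aggregate(f, g)
    print(agg[5], split[5])   # 12 2: give 2 units to participant 1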

    Full text (pdf)
    fulltext
  • 30.
    Castegren, Elias
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Capability-Based Type Systems for Concurrency Control (2018). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    Since the early 2000s, in order to keep up with the performance predictions of Moore's law, hardware vendors have had to turn to multi-core computers. Today, parallel hardware is everywhere, from massive server halls to the phones in our pockets. However, this parallelism does not come for free. Programs must explicitly be written to allow for concurrent execution, which adds complexity that is not present in sequential programs. In particular, if two concurrent processes share the same memory, care must be taken so that they do not overwrite each other's data. This issue of data-races is exacerbated in object-oriented languages, where shared memory in the form of aliasing is ubiquitous. Unfortunately, most mainstream programming languages were designed with sequential programming in mind, and therefore provide little or no support for handling this complexity. Even though programming abstractions like locks can be used to synchronise accesses to shared memory, the burden of using these abstractions correctly and efficiently is left to the programmer.

    The contribution of this thesis is programming language technology for controlling concurrency in the presence of shared memory. It is based on the concept of reference capabilities, which facilitate safe concurrent programming by restricting how memory may be accessed and shared. Reference capabilities can be used to enforce correct synchronisation when accessing shared memory, as well as to prevent unsafe sharing when using more fine-grained concurrency control, such as lock-free programming. This thesis presents the design of a capability-based type system with low annotation overhead that can statically guarantee the absence of data-races without giving up object-oriented features like aliasing, subtyping and code reuse. The type system is formally proven safe, and has been implemented for the highly concurrent object-oriented programming language Encore.
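
    As a minimal illustration of the problem such a type system rules out statically (not code from the thesis; Encore is not Python), here two threads race on shared mutable state, and a lock restores correctness dynamically:

    import threading

    # Two threads racing on a shared counter; the lock enforces the
    # discipline that reference capabilities would guarantee at compile time.
    counter = 0
    lock = threading.Lock()

    def increment(times):
        global counter
        for _ in range(times):
            with lock:                 # without the lock, updates may be lost
                counter += 1

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)                     # 200000; often less if the lock is removed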

    Full text (pdf)
    fulltext
    Download (jpg)
    preview image
  • 31.
    Castegren, Elias
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Wrigstad, Tobias
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Encore: Coda (2023). In: State-of-Art volume on Active Objects, Springer Nature, 2023. Book chapter, part of anthology (Peer-reviewed)
    Abstract [en]

    Encore is a programming language that was developed between 2014 and 2019. Encore was designed following the principle of inversion of defaults: computations are concurrent (rather than sequential) by default; data is isolated (rather than freely sharable) by default. The language served as a seedbed for a large number of research ideas aimed at making programming with active objects safe, expressive and efficient.

    Encore allows active objects to share data but statically ensures the absence of data races and allows fully concurrent garbage collection. Active objects can synchronize using first-class futures, which are also used to delegate and coalesce computations across active objects. The type system also supports orchestration of intra-object parallelism, expressed using composable units of computation. Active objects which see a lot of traffic can turn themselves into passive objects protected by lock-free synchronization mechanisms to avoid performance bottlenecks, while still facilitating safe sharing and concurrent garbage collection.

    This paper gives an overview of these features of Encore, reflecting on lessons learned from trying to fit all of these research ideas into a single language.

  • 32. Cheeseman, Luke
    et al.
    Parkinson, Matthew J.
    Clebsch, Sylvan
    Kogias, Marios
    Drossopoulou, Sophia
    Chisnall, David
    Wrigstad, Tobias
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Liétar, Paul
    When Concurrency Matters: Behaviour-Oriented Concurrency (2023). In: Proceedings of the ACM on Programming Languages, E-ISSN 2475-1421, Vol. 7, no. OOPSLA2, pp. 1531-1560. Journal article (Peer-reviewed)
    Abstract [en]

    Expressing parallelism and coordination is central to modern concurrent programming. Many mechanisms exist for expressing both parallelism and coordination, but the design decisions for these two mechanisms are tightly intertwined. We believe that the interdependence of these two mechanisms should be recognised and achieved through a single, powerful primitive. We are not the first to realise this: the prime example is actor-model programming, where parallelism arises through fine-grained decomposition of a program's state into actors that are able to execute independently in parallel. However, actor-model programming has a serious pain point: updating multiple actors as a single atomic operation is a challenging task. We address this pain point by introducing a new concurrency paradigm: Behaviour-Oriented Concurrency (BoC). In BoC, we revisit the fundamental concept of a behaviour to provide a more transactional concurrency model. BoC enables asynchronously creating atomic and ordered units of work with exclusive access to a collection of independent resources. In this paper, we describe BoC informally in terms of examples, which demonstrate the advantages of exclusive access to several independent resources, as well as the need for ordering. We define it through a formal model. We demonstrate its practicality by implementing a C++ runtime. We argue its applicability through the Savina benchmark suite: benchmarks in this suite can be represented more compactly using BoC in place of actors, and we observe comparable, if not better, performance.
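
    The following is a rough, simplified Python emulation of a BoC-style "when" (the paper's runtime is C++, and this sketch omits BoC's ordering guarantees between behaviours): a behaviour runs asynchronously with exclusive access to every resource it names, with locks acquired in a canonical order to avoid deadlock.

    import threading

    class Cown:                        # a concurrently-owned resource
        _next_id = 0
        def __init__(self, value):
            self.value = value
            self.lock = threading.Lock()
            self.id = Cown._next_id
            Cown._next_id += 1

    def when(*cowns):
        def schedule(body):
            def task():
                for c in sorted(cowns, key=lambda c: c.id):  # canonical order
                    c.lock.acquire()
                try:
                    body(*cowns)       # atomic w.r.t. these cowns
                finally:
                    for c in cowns:
                        c.lock.release()
            threading.Thread(target=task).start()
        return schedule

    src, dst = Cown(100), Cown(0)

    @when(src, dst)                    # transfer atomically across two cowns
    def _(src, dst):
        src.value -= 10
        dst.value += 10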

    Full text (pdf)
    fulltext
  • 33.
    Dagfalk, Johanna
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Kyhle, Ellen
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Listening in on Productivity: Applying the Four Key Metrics to measure productivity in a software development company (2021). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Software development is an area in which companies not only need to keep up with the latest technology, but additionally need to continuously increase their productivity to stay competitive in the industry. One company currently facing these challenges is Storytel - one of the strongest players on the Swedish audiobook market - with about a fourth of all employees involved in software development, and a rapidly growing workforce.

    With the purpose of understanding how the Storytel Tech Department is performing, this thesis maps Storytel's productivity in terms of the Four Key Metrics: Deployment Frequency, Delivery Lead Time, Mean Time To Restore and Change Fail Rate. The Storytel Tech Department is classified into a performance category (Low, Medium, High, Elite) through a deep dive into the raw system data existing at Storytel, mainly the case management system Jira. A survey of the Tech Department was conducted to give insights into the connection between the human and technical factors influencing productivity (categorized into Culture, Environment, and Process) and estimated productivity. Alongside these data collections, interviews with Storytel employees were performed to gather further knowledge about the Tech Department and to understand potential bottlenecks and obstacles.

    All Four Key Metrics could be determined from raw system data, except Mean Time To Restore, which was complemented by survey estimates. The generalized findings for the Four Key Metrics indicate that Storytel can be minimally classified as a ‘medium’ performer. The factors, validated through factor analysis, found to have an impact on the Four Key Metrics were Generative Culture, Efficiency (Automation and Shared Responsibility) and Number of Projects. Lastly, the major bottlenecks found were related to Architecture, Automation, Time Fragmentation and Communication.

    The thesis contributes interesting findings from an expanding, middle-sized, healthy company in the audiobook streaming industry, but the results can be beneficial for other software development companies to learn from as well. Performing a similar study with a greater sample size, additionally enabling comparisons between teams, is suggested for future research.
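
    As a minimal sketch of how such metrics can be computed from raw records (the data below is invented; the thesis mined Jira system data), three of the Four Key Metrics follow directly from timestamps and failure flags:

    from datetime import datetime, timedelta

    # Invented deployment records: (commit time, deploy time, failed?)
    deploys = [
        (datetime(2021, 3, 1, 9),  datetime(2021, 3, 2, 15), False),
        (datetime(2021, 3, 3, 11), datetime(2021, 3, 3, 16), True),
        (datetime(2021, 3, 8, 10), datetime(2021, 3, 9, 12), False),
    ]
    period_days = 14                                   # observation window

    deployment_frequency = len(deploys) / period_days  # deploys per day
    lead_times = [dep - com for com, dep, _ in deploys]
    mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)
    change_fail_rate = sum(failed for *_, failed in deploys) / len(deploys)

    print(f"{deployment_frequency:.2f} deploys/day, "
          f"mean lead time {mean_lead_time}, fail rate {change_fail_rate:.0%}")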

    Full text (pdf)
    fulltext
  • 34.
    Dahlin, Niklas
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Optimering av takttidtabell för järnvägstrafik på en regional nivå: En fallstudie av fyra Mälarstäder [Optimisation of a cyclic timetable for railway traffic at a regional level: A case study of four Mälaren cities] (2018). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Effective commuting is an important part of regional development and attractiveness, and railway traffic is a favourable mode of transportation owing to its energy efficiency and environmental friendliness. Getting more people to choose the train for commuting is therefore desirable. One concept aiming to increase the use of railway traffic is the cyclic timetable. At present the concept is mostly used at a national level, but there are possibilities to implement the ideas at a more regional level.

    The purpose of this thesis is to study whether and how a cyclic timetable for railway traffic can be constructed and optimised for a region, more specifically the region "Fyra Mälarstäder". Challenges and opportunities in implementing this type of timetable at a regional level are also discussed.

    In order to construct a timetable for railway traffic, several infrastructural limitations must be taken into account; an example that severely limits railway capacity is single-track sections. Hence, to be able to construct and optimise the timetable, these limitations, together with a number of other criteria, were formulated mathematically as constraints of an optimisation problem. The objective function consisted of a weighted sum of trip times within the system, which was minimised.

    The results show that a cyclic timetable could successfully be used for "Fyra Mälarstäder". However, some aspects remain to be investigated, including train line continuation beyond the system boundaries of the study. As for the optimisation, the weighting of the objective function appears to play a considerable role in obtaining a satisfactory timetable. Varying and adjusting certain parameters may also be favourable for achieving as beneficial a timetable as possible.

    Full text (pdf)
    fulltext
  • 35.
    Daniels, Mats
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datavetenskapens didaktik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik.
    Pears, Arnold
    Dept of Learning in Engineering Sciences, KTH Royal Institute of Technology, Stockholm, Sweden.
    Nylén, Aletta
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datavetenskapens didaktik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Inspiring Computational Thinking: A Science Fair Activity (2021). In: 2021 IEEE Frontiers in Education Conference (FIE), Institute of Electrical and Electronics Engineers (IEEE), 2021. Conference paper (Peer-reviewed)
    Abstract [en]

    This Research Full Paper introduces a model for structuring playful, challenge-based learning activities. The model draws on a process of decontextualising a challenge to access its Computational Thinking intellectual and conceptual components, and subsequently computationalising these components into computational artifacts that are recontextualised to render them attractive and accessible to school pupils. The case study follows a thick-description approach to evaluating the engagement potential of the instructional design, as well as the didactic choices made during implementation. We conclude that Bebras cards and Dash robots provide considerable support for playful engagement with computational concepts and for engaging children with different scientific backgrounds in Computational Thinking. In particular, flexibility in how they are used and easy adaptation of the challenge level make them useful in contexts with broad participation. Additionally, we find that using robots to provide a link between the theoretical presentation of CT on the Bebras card and a physical representation and programming challenge is engaging and helps participants focus on algorithmic concepts.

  • 36.
    Demmelmaier, Gustav
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Westerberg, Carl
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Data Segmentation Using NLP: Gender and Age (2021). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Natural language processing (NLP) opens up the possibility for a computer to read, decipher, and interpret human languages, and eventually to use them in ways that enable further understanding of the interaction and communication between humans and computers. When appropriate data is available, NLP makes it possible to determine not only the sentiment of a text but also information about the author behind an online post. Previous studies show that NLP can potentially go deeper into such subjective information, enabling author classification from text data.

    This thesis addresses the lack of demographic insight into online user data by studying language use in texts. It compares four popular yet diverse machine learning algorithms for gender and age segmentation. During the project, the age analysis was abandoned due to insufficient data. The online texts were analysed and quantified into 118 parameters based on linguistic differences. Using supervised learning, gender was correctly predicted in 82% of the cases when analysing data from English-speaking online users. The training and test data may have some correlations, which is important to note. Language is complex and, in this case, the more complex methods, SVMs and neural networks, performed better than the less complex Naive Bayes and logistic regression.
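
    A minimal sketch of the supervised-learning setup (with invented, purely illustrative toy data; the thesis used 118 hand-crafted linguistic parameters and compared SVMs, neural networks, Naive Bayes and logistic regression):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented toy corpus; a real study needs a large labelled corpus
    # and careful feature engineering.
    texts = ["omg sooo cute haha love it",
             "the match ended 2-1 after extra time",
             "haha yes totally agree sooo true",
             "the engine spec looks solid"]
    labels = ["f", "m", "f", "m"]

    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    print(model.predict(["sooo happy haha"]))   # likely ['f'] on this toy set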

    Full text (pdf)
    fulltext
  • 37.
    Edqvist, Samuel
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Scheduling Physicians using Constraint Programming (2008). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
    Full text (pdf)
    FULLTEXT02
  • 38.
    Elvander, Adam
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Developing a Recommender System for a Mobile E-commerce Application (2015). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis describes the process of conceptualizing and developing a recommender system for a peer-to-peer commerce application. The application in question is called Plick and is a vintage clothes marketplace where private persons and smaller vintage retailers buy and sell secondhand clothes from each other. Recommender systems are a relatively young field of research but have become more popular in recent years with the advent of big data applications such as Netflix and Amazon. Examples of recommender systems being used in e-marketplace applications are however still sparse, and the main contribution of this thesis is insight into this sub-problem in recommender system research. The three main families of recommender algorithms are analyzed and two of them are deemed unfitting for the e-marketplace scenario. Out of the third family, collaborative filtering, three algorithms are described, implemented and tested on a large subset of data collected in Plick, consisting mainly of clicks made by users on items in the system. By using both traditional and novel evaluation techniques, it is further shown that a user-based collaborative filtering algorithm yields the most accurate recommendations when compared to actual user behavior. This represents a divergence from recommender systems commonly used in e-commerce applications. The paper concludes with a discussion of the cause and significance of this difference and the impact of certain data-preprocessing techniques on the results.
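
    A minimal sketch of user-based collaborative filtering on click counts (invented data; the thesis worked on a large click log from Plick): other users' clicks on unseen items are weighted by their cosine similarity to the target user.

    import math

    # Invented click counts: user -> {item: clicks}.
    clicks = {
        "ann":  {"jacket": 5, "boots": 2},
        "ben":  {"jacket": 4, "boots": 1, "scarf": 3},
        "cleo": {"dress": 6, "scarf": 1},
    }

    def cosine(u, v):
        num = sum(u[i] * v[i] for i in set(u) & set(v))
        den = math.sqrt(sum(x * x for x in u.values())) * \
              math.sqrt(sum(x * x for x in v.values()))
        return num / den if den else 0.0

    def recommend(user, k=1):
        mine, scores = clicks[user], {}
        for other, theirs in clicks.items():
            if other == user:
                continue
            sim = cosine(mine, theirs)
            for item, n in theirs.items():
                if item not in mine:
                    scores[item] = scores.get(item, 0.0) + sim * n
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend("ann"))   # ['scarf']: ben's clicks, weighted by similarity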

    Full text (pdf)
    fulltext
  • 39.
    Engvall, Maja
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Data-driven cost management for a cloud-based service across autonomous teams (2017). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Spotify started to use the cloud-based data warehouse BigQuery in 2016, with a pay-as-you-go business model. Since then, usage has increased rapidly in volume and number of users across the organisation, a result of its ease of use compared to previous data warehouse solutions. The technology procurement team lacks an overview of how BigQuery is used across Spotify and a strategy for maintaining an environment where users make cost-informed decisions when designing queries and creating tables. Incidents resulting in unexpectedly high bills are currently handled by a capacity analyst using billing data, which lacks the granularity to map cost to individual BigQuery users.

    The objective of this research is to provide recommendations on how audit data can enable a data-driven, cost-effective environment for BigQuery across the matrix-formed engineering organisation at Spotify. First, an overview of current usage patterns is presented based on audit data, modelled with regard to volume, complexity and utilization. Different patterns are identified using K-means clustering, including high-volume-consuming squads and underutilized tables. Secondly, recommendations on transparency of audit data for cost-effectiveness are based on insights from the cluster analysis, interviews and characteristics of the organisation structure.

    The recommendations include transparency of data consumption to producers, to prevent paying for unused resources, and transparency of usage patterns to consumers, to avoid unexpected bills. Usage growth is recommended to be presented to the technology procurement squad, which enables better understanding and mitigates challenges in cost forecasting and control.
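
    A minimal sketch of the clustering step (invented numbers; the thesis modelled volume, complexity and utilization from BigQuery audit logs):

    import numpy as np
    from sklearn.cluster import KMeans

    # Invented usage profiles; columns: TB scanned per month,
    # average query complexity, table utilization.
    usage = np.array([
        [120.0, 0.9, 0.95],   # high-volume consumer
        [115.0, 0.8, 0.90],
        [  2.0, 0.2, 0.10],   # underutilized tables
        [  1.5, 0.3, 0.05],
    ])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(usage)
    print(km.labels_)          # e.g. [1 1 0 0]: two distinct usage patterns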

    Full text (pdf)
    fulltext
  • 40.
    Eriksson, Benjamin
    et al.
    Chalmers University of Technology, Gothenburg, Sweden.
    Stjerna, Amanda
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    De Masellis, Riccardo
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik.
    Rümmer, Philipp
    University of Regensburg, Regensburg, Germany.
    Sabelfeld, Andrei
    Chalmers University of Technology, Gothenburg, Sweden.
    Black Ostrich: Web Application Scanning with String Solvers (2023). In: CCS '23: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security / [ed] Weizhi Meng; Christian D. Jensen; Cas Cremers; Engin Kirda, Association for Computing Machinery (ACM), 2023, pp. 549-563. Conference paper (Peer-reviewed)
    Abstract [en]

    Securing web applications remains a pressing challenge. Unfortunately, the state of the art in web crawling and security scanning still falls short of deep crawling. A major roadblock is the crawlers' limited ability to pass input validation checks when web applications require data of a certain format, such as email, phone number, or zip code. This paper develops Black Ostrich, a principled approach to deep web crawling and scanning. The key idea is to equip web crawling with string constraint solving capabilities to dynamically infer suitable inputs from regular expression patterns in web applications and thereby pass input validation checks. To enable this use of constraint solvers, we develop new automata-based techniques to process JavaScript regular expressions. We implement our approach extending and combining the Ostrich constraint solver with the Black Widow web crawler. We evaluate Black Ostrich on a set of 8,820 unique validation patterns gathered from 21,667,978 forms from a combination of the July 2021 Common Crawl and Tranco top 100K. For these forms and reconstructions of input elements corresponding to the patterns, we demonstrate that Black Ostrich achieves 99% coverage of the form validations, compared to an average of 36% for the state-of-the-art scanners. Moreover, out of the 66,377 domains using these patterns, we solve all patterns on 66,309 (99%) while the combined efforts of the other scanners cover 52,632 (79%). We further show that our approach can boost coverage by evaluating it on three open-source applications. Our empirical studies include a study of email validation patterns, where we find that 213 (26%) out of the 825 found email validation patterns liberally admit XSS injection payloads.
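
    As a naive stand-in for the string-solving step (Black Ostrich uses the Ostrich solver; brute-force enumeration, shown here only to convey the goal, does not scale to realistic patterns):

    import itertools, re, string

    # Find some input accepted by a validation regex by brute-force
    # enumeration -- a toy substitute for a real string solver.
    def solve(pattern, alphabet=string.ascii_lowercase + string.digits + "@.",
              max_len=4):
        rx = re.compile(pattern)
        for n in range(max_len + 1):
            for cand in itertools.product(alphabet, repeat=n):
                s = "".join(cand)
                if rx.fullmatch(s):
                    return s
        return None

    print(solve(r"[a-z]\d{3}"))   # -> 'a000', an input passing the check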

    Full text (pdf)
    FULLTEXT01
  • 41.
    Eriksson, Lars-Henrik
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Verification and generation of geographical data using domain theory (2005). In: TRain Workshop at the 3rd IEEE International Conference on Software Engineering and Formal Methods (SEFM'05), 2005. Conference paper (Other academic)
    Full text (pdf)
    Extended abstract
  • 42.
    Fernandez-Reyes, Kiko
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Gariano, Isaac Oscar
    Noble, James
    Greenwood-Thessman, Erin
    Homer, Michael
    Wrigstad, Tobias
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Dala: a simple capability-based dynamic language design for data race-freedom (2021). In: Onward! 2021: Proceedings of the 2021 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software / [ed] Wolfgang De Meuter, Elisa Baniassad, 2021, pp. 1-17. Conference paper (Peer-reviewed)
  • 43.
    Fernández Reyes, Francisco Ramón
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Abstractions to Control the Future (2021). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    Multicore and manycore computers are the norm nowadays, and users expect their programs to do multiple things concurrently. To support that, developers use concurrency abstractions such as threads, promises, futures, and/or channels to exchange information. All these abstractions introduce trade-offs between the concurrency model and the language guarantees, and developers accept these trade-offs for the benefits of concurrent programming.

    Many concurrent languages are multi-paradigm, e.g., mix the functional and object-oriented paradigms. This is beneficial to developers because they can choose the most suitable approach when solving a problem. From the point of view of concurrency, purely functional programming languages are data-race free since they only support immutable data. Object-oriented languages do not get a free lunch, and neither do multi-paradigm languages that have imperative features.

    The main problem is uncontrolled concurrent access to shared mutable state, which may inadvertently introduce data races. A data race happens when two concurrent memory operations target the same location, at least one of them is a write, and no synchronisation operation is involved. Data races make programs exhibit (unwanted) non-deterministic behaviour.

    The contribution of this thesis is two-fold. First, it introduces new concurrent abstractions in a purely functional, statically typed programming language (Paper I – Paper III); these abstractions allow developers to write concurrent control- and delegation-based patterns. Second, it introduces a capability-based dynamic programming model, named Dala, that extends the applicability of the concurrent abstractions to an imperative setting while maintaining data-race freedom (Paper IV). Developers can also use the Dala model to migrate unsafe programs, i.e., programs that may suffer from data races, to data-race-free programs.

    Full text (pdf)
    fulltext
    Download (jpg)
    presentation image
  • 44.
    Fomkin, Ruslan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Optimization and Execution of Complex Scientific Queries (2009). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Large volumes of data produced and shared within scientific communities are analyzed by many researchers to investigate different scientific theories. Currently the analyses are implemented in traditional programming languages such as C++. This is inefficient for research productivity, since such programs are difficult to write, understand, and modify. Furthermore, programs should scale over large data volumes and analysis complexity, which further complicates code development.

    This thesis investigates the use of database technologies to implement scientific applications, in which data are complex objects describing measurements of independent events, and the analyses select events by applying conjunctions of complex numerical filters to each object separately. An example of such an application is the analysis of collision events produced by the ATLAS experiment for the presence of Higgs bosons. For efficient implementation of such an ATLAS application, a new data stream management system, SQISLE, is developed. In SQISLE, queries are specified over complex objects which are efficiently streamed from sources through the query engine. This streaming approach is compared with the conventional approach of loading events into a database before querying. Since the queries implementing scientific analyses are large and complex, novel techniques are developed for efficient query processing. To obtain efficient plans for such queries, SQISLE implements runtime query optimization strategies, which during query execution collect runtime statistics for a query, reoptimize the query using the collected statistics, and dynamically switch optimization strategies. The cost-based optimization utilizes a novel cost model for aggregate functions over nested subqueries. To alleviate estimation errors, large queries are decomposed into conjunctions of subqueries over which runtime statistics are measured. Performance is further improved by query transformation, view materialization, and partial evaluation. ATLAS queries in SQISLE using these query processing techniques perform close to or better than hard-coded C++ implementations of the same analyses.

    Scientific data are often stored in Grids, which manage both storage and computational resources. This thesis includes a framework, POQSEC, that utilizes Grid resources to scale scientific queries over large data volumes by parallelizing the queries and shipping the data management system itself, e.g. SQISLE, to Grid computational nodes for parallel query execution.

    Full text (pdf)
    FULLTEXT01
  • 45.
    Forghani, Kamran
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Carlsson, Mats
    RISE Research Institutes of Sweden.
    Flener, Pierre
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Fredriksson, Magnus
    Luleå University of Technology, Wood Technology.
    Pearson, Justin
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Yuan, Di
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Maximizing Value Yield in Wood Industry through Flexible Sawing and Product Grading Based on Wane and Log Shape (2024). In: Computers and Electronics in Agriculture, ISSN 0168-1699, E-ISSN 1872-7107, Vol. 216, article id 108513. Journal article (Peer-reviewed)
    Abstract [en]

    The optimization of sawing processes in the wood industry is critical for maximizing efficiency and profitability. The introduction of computerized tomography scanners provides sawmill operators with three-dimensional internal models of logs, which can be used to assess value and yield more accurately. We present a methodology for solving the sawing optimization problem employing a flexible sawing scheme that allows greater flexibility in cutting logs into products while considering product quality classes influenced by wane defects. The methodology has two phases: preprocessing and optimization. In the preprocessing phase, two alternative algorithms generate and evaluate the potential sawing positions of products by considering the 3D surface of the log, product size requirements, and product quality classes. In the optimization phase, a maximum set-packing problem is solved over the preprocessed data using mixed-integer programming (MIP), aiming to obtain a feasible cut pattern that maximizes value yield. This is implemented in a system named FlexSaw, which takes advantage of parallel computation during the preprocessing phase and utilizes a MIP solver during the optimization phase. The proposed sawing methods are evaluated on the Swedish Pine Stem Bank. Additionally, FlexSaw is compared with an existing tool that utilizes cant sawing. The results demonstrate the superiority of flexible sawing. While the practical feasibility of flexible sawing is constrained by the limitations of current sawmill machinery, the potential increase in yield motivates the exploration of alternative machinery in the wood industry.
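
    A brute-force sketch of the maximum set-packing step (the paper solves it with MIP at scale; the candidate products and values below are invented): choose a disjoint set of candidate products maximizing total value.

    from itertools import combinations

    # Each candidate product occupies a set of log cells and has a value.
    candidates = [
        ({"c1", "c2"}, 40),        # (cells occupied, product value)
        ({"c2", "c3"}, 55),
        ({"c3", "c4"}, 35),
        ({"c5"},       10),
    ]

    best_value, best_pick = 0, ()
    for r in range(1, len(candidates) + 1):
        for pick in combinations(range(len(candidates)), r):
            cells = [candidates[i][0] for i in pick]
            if sum(len(c) for c in cells) == len(set().union(*cells)):  # disjoint
                value = sum(candidates[i][1] for i in pick)
                if value > best_value:
                    best_value, best_pick = value, pick

    print(best_value, best_pick)   # 85 (0, 2, 3): products 0, 2 and 3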

  • 46.
    Francisco Rodríguez, María Andreína
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Analysis, synthesis and application of automaton-based constraint descriptions (2017). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    Constraint programming (CP) is a technology in which a combinatorial problem is modelled as a conjunction of constraints on variables ranging over given initial domains, and optionally an objective function on the variables. Such a model is given to a general-purpose solver performing systematic search to find constraint-satisfying domain values for the variables, giving an optimal value to the objective function. A constraint predicate (also known as a global constraint) does two things: from the modelling perspective, it allows a modeller to express a commonly occurring combinatorial substructure, for example that a set of variables must take distinct values; from the solving perspective, it comes with a propagation algorithm, called a propagator, which removes some but not necessarily all impossible values from the current domains of its variables when invoked during search.

    Although modern CP solvers have many constraint predicates, often a predicate one would like to use is not available. In the past, the choices were either to reformulate the model or to write one's own propagator. In this dissertation, we contribute to the automatic design of propagators for new predicates.

    Integer time series are often subject to constraints on the aggregation of the features of all maximal occurrences of some pattern. For example, the minimum width of the peaks may be constrained. Automata allow many constraint predicates for variable sequences, and in particular many time-series predicates, to be described in a high-level way. Our first contribution is an algorithm for generating an automaton-based predicate description from a pattern, a feature, and an aggregator.

    It has previously been shown how to decompose an automaton-described constraint on a variable sequence into a conjunction of constraints whose predicates have existing propagators. This conjunction provides the propagation, but it is unknown how to propagate it efficiently. Our second contribution is a tool for deriving, in an off-line process, implied constraints for automaton-induced constraint decompositions to improve propagation. Further, when a constraint predicate functionally determines a result variable that is unchanged under reversal of a variable sequence, we provide as our third contribution an algorithm for deriving an implied constraint between the result variables for a variable sequence, a prefix thereof, and the corresponding suffix.
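
    As a small illustration of the (pattern, feature, aggregator) idea (under a deliberately simplified peak definition, not the thesis' automaton synthesis): abstract the series into a signature of '<', '=', '>' steps, take the widths of peak tops as the feature, and aggregate with min, as in a MIN_WIDTH_PEAK-style constraint.

    def signature(xs):
        return ["<" if a < b else ">" if a > b else "="
                for a, b in zip(xs, xs[1:])]

    def min_peak_width(xs):
        # width of a peak top = number of positions from the end of the
        # last climb up to the step that starts the descent
        widths, start = [], None
        for i, step in enumerate(signature(xs)):
            if step == "<":
                start = i + 1
            elif step == ">" and start is not None:
                widths.append(i - start + 1)
                start = None
        return min(widths, default=None)

    print(min_peak_width([1, 4, 2, 5, 5, 5, 3]))   # 1: the lone 4 is narrowest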

    Full text (pdf)
    fulltext
    Download (pdf)
    errata
    Download (jpg)
    presentation image
  • 47.
    Fransson, Moa
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Fåhraeus, Lisa
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Finding Patterns in Vehicle Diagnostic Trouble Codes: A data mining study applying associative classification (2015). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In Scania vehicles, Diagnostic Trouble Codes (DTCs) are collected while driving and later loaded into a central database when the vehicle visits a workshop. These DTCs are used statistically to analyse vehicle health status, which is why correct data is desirable. In workshops, however, DTCs can occur due to work and tests. Nevertheless, they are loaded into the database without any notification. In order to perform an accurate analysis of vehicle health status, it would be desirable if such DTCs could be found and removed. The thesis has examined whether this is possible by searching for patterns in DTCs indicating whether the DTCs were generated in a workshop or not. Due to its easily interpretable outcome, an Associative Classification method was used with the aim of categorising data. The classifier was built applying well-known algorithms, and then two classification algorithms were developed to fit the data structure when labelling new data. The final classifier performed with an accuracy above 80 percent, with no distinctive differences between the two algorithms. However, barely 50 percent of all workshop DTCs were found. The conclusion is that either patterns occur in only 50 percent of workshop DTCs, or the classifier can only detect 50 percent of them. The patterns found could confirm previous knowledge regarding workshop-generated DTCs as well as provide Scania with new information.
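
    A toy sketch of associative classification (invented data; the thesis used established rule-mining algorithms plus two custom classification algorithms): mine single-feature class rules with enough support and confidence, then classify by the most confident matching rule.

    from collections import Counter

    data = [  # (set of features observed with a DTC, where it arose)
        ({"ignition_on", "testmode"}, "workshop"),
        ({"testmode", "door_open"},   "workshop"),
        ({"low_fuel", "ignition_on"}, "road"),
        ({"low_fuel"},                "road"),
    ]

    min_support, min_conf = 2, 0.7
    feature_count, rule_count = Counter(), Counter()
    for features, label in data:
        for f in features:
            feature_count[f] += 1
            rule_count[(f, label)] += 1

    rules = {}                         # feature -> (label, confidence)
    for (f, label), n in rule_count.items():
        conf = n / feature_count[f]
        if n >= min_support and conf >= min_conf:
            rules[f] = (label, conf)

    def classify(features, default="road"):
        matching = [rules[f] for f in features if f in rules]
        return max(matching, key=lambda r: r[1])[0] if matching else default

    print(classify({"testmode", "ignition_on"}))   # 'workshop'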

    Full text (pdf)
    Finding Patterns in Vehicle Diagnostic Trouble Codes
  • 48.
    Fredriksson, Susanne
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Auktorisation och ackreditering inom Försvarsmakten: En studie i nyttan av en standardiserad process för att hantera informationssäkerhet [Authorisation and accreditation in the Swedish Armed Forces: A study of the utility of a standardised process for managing information security] (2011). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Information Technology is an essential part of society today, not least in large organisations dealing with sensitive information. An example of such an organisation is the Swedish Armed Forces, which is indeed in need of ways to ensure information security in its Information Technology systems. The means used is an authorisation and accreditation process.

    All Information Technology systems go through a life cycle which includes realisation, usage, development and decommissioning. In the Swedish Armed Forces this life cycle is managed as an authorisation process. Each step in the process is followed by an authorisation decision, and one of these steps is accreditation. Accreditation is a formal approval to put the system into operation.

    The aim of the thesis is to study how large organisations may ensure information security when developing IT, and the importance of a standardised process for handling information security. The study has been carried out by comparing the process of the Swedish Armed Forces with other methods for running projects and theories concerning the development of IT.

    Interviews with information security consultants at Combitech AB, along with a study of documentation, have been carried out in order to study the process. The theoretical framework has been formed through a literature study of project models and methods for the development of IT.

    The main result of the thesis is that a standardised process which manages quality, communication, traceability, complexity, aims, operation and decommissioning plans, risk assessment, and use cases increases the chance of a successful project and of achieving information security in the development of new IT. High quality in the formulation of the security aims and the specification of requirements, along with tests to establish that all requirements are fulfilled, is crucial when it comes to information security.

    Full text (pdf)
    FULLTEXT01
  • 49.
    Frisberg, Olle
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Placement support for signal intelligence units (2019). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The goal of this thesis was to develop an optimization model that automatically finds optimal groupings for signal intelligence units, aiming to maximize surveillance capability in a user-defined target area. Consideration was given to transportation possibilities, the type of terrain, and the requirement of radio communication between the direction finders. Three scenarios were tested, each providing its own topographical challenges. Several derivative-free optimization methods were implemented and evaluated, including global methods that find approximate groupings using a geometrical model developed for this purpose, and the local method pattern search, which used the already existing model. Particle swarm optimization and a genetic algorithm turned out to be the best global solvers. A grouping found by a global method was later improved by pattern search, which evaluates possible groupings nearby. The greatest practical challenge for particle swarm and pattern search was finding feasible placement points given a desired direction and step length. Workarounds were developed, allowing for more dynamic search patterns. For future use, the placement support should be tested on more scenarios with different prerequisites, and the approved terrain types have to be adjusted according to the kind of vehicle carrying the direction finder.
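
    A minimal pattern-search sketch (the objective and the feasibility mask below are invented; the thesis used terrain and radio-communication models): evaluate the four compass neighbours at the current step length, move to an improving feasible point, and halve the step when stuck.

    def feasible(x, y):                     # toy terrain mask
        return 0 <= x <= 10 and 0 <= y <= 10 and not (4 <= x <= 6 and y < 3)

    def coverage(x, y):                     # toy surveillance objective
        return -((x - 7) ** 2 + (y - 2) ** 2)

    def pattern_search(x, y, step=2.0, tol=0.1):
        while step > tol:
            best = (coverage(x, y), x, y)
            for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
                nx, ny = x + dx, y + dy
                if feasible(nx, ny):        # skip impassable terrain
                    best = max(best, (coverage(nx, ny), nx, ny))
            if (best[1], best[2]) == (x, y):
                step /= 2                   # no improving neighbour: refine
            else:
                _, x, y = best
        return x, y

    print(pattern_search(1.0, 9.0))         # converges to (7.0, 2.0)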

    Full text (pdf)
    Placement_support_for_signal_intelligence_units
  • 50.
    Gutkovas, Ramunas
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Advancing concurrent system verification: Type based approach and tools (2014). Licentiate thesis, comprising papers (Other academic)
    Abstract [en]

    Concurrent systems, i.e., systems of parallel processes, are nearly ubiquitous, and verifying the correctness of such systems is becoming an important subject. Many formalisms have been invented for this purpose; however, new types of systems keep being introduced, and there is a need for handling larger systems. One example is wireless sensor networks, which are being deployed in increasing numbers in various areas, in particular safety-critical areas such as bush fire detection. Thus, ensuring their correctness is important.

    A process calculus is a formal language for modeling concurrent systems. The pi-calculus is a prominent example of such a language, featuring message-passing concurrency. Psi-calculi is a parametric framework that extends the pi-calculus with arbitrary data and logics. Psi-calculi feature a universal theory, with its results checked in an automated theorem prover, ensuring their correctness.

    In this thesis, we extend the expressiveness and modeling precision of psi-calculi by introducing a sort system and generalised pattern matching. We show that the extended psi-calculi enjoy the same meta-theoretical results.

    We have developed Pwb, a tool for the psi-calculi framework. The tool provides high-level interactive symbolic execution and automated behavioural equivalence checking. We exemplify the use of the tool by developing a high-level executable model of a data collection protocol for wireless sensor networks.

    We are the first to introduce a session-type-based system for systems with unreliable communication. Remarkably, we do not need to add specific extensions to the types to accommodate such systems. We prove that the standard desirable properties of type systems also hold for our type system.

    Full text (pdf)
    fulltext