  • 1.
    Adwent, Ann-Kristin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Shared IT Service Center i kommuner [Shared IT Service Centers in Municipalities] (2012). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

To be able to maintain an adequate IT service for various users and needs, more municipalities are looking at the possibility of merging their local IT departments. One solution for merging multiple services/functions and creating synergy opportunities is called a Shared Service Center (SSC). The concept of an SSC is that the administrative service itself becomes a core activity within the organization. The aim of this thesis is to provide municipalities that are considering a merger of their local IT departments with recommendations on how to design the Shared IT Service Center. The recommendations are based on an analysis of IT-management literature and reports, and on a study of three ongoing collaborations.

    The conclusions drawn from the study suggest the following guidelines for the design of a Shared IT Service Center for municipalities:

      • Create a vision associated with a specific and structured target state.
      • Identify the needs of different target groups in the municipalities and set a common standard.
      • Create a clear and practical model/SLA for the form of the cooperation and agreement.
      • Ensure that the individual municipalities retain their commissioning skills, so that they are not lost in the context of a common IT operation.
      • Find an organization that is democratic for the member municipalities and facilitates a long-term partnership.
      • Specify the operation and maintenance so that it can be regulated and controlled.
      • Establish a common help desk.
      • Establish a common standard and consolidated infrastructure before introducing a common technology platform.

  • 2.
    Andrejev, Andrej
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Semantic Web Queries over Scientific Data (2016). Doctoral thesis, monograph (Other academic).
    Abstract [en]

Semantic Web and Linked Open Data provide a potential platform for interoperability of scientific data, offering a flexible model for providing machine-readable and queryable metadata. However, RDF and SPARQL have gained limited adoption within the scientific community, mainly due to the lack of support for managing massive numeric data, and for certain other important features such as extensibility with user-defined functions, query modularity, and integration with existing environments and workflows.

    We present the design, implementation and evaluation of Scientific SPARQL – a language for querying data and metadata combined, represented using the RDF graph model extended with numeric multidimensional arrays as node values – RDF with Arrays. The techniques used to store RDF with Arrays in a scalable way and process Scientific SPARQL queries and updates are implemented in our prototype software – Scientific SPARQL Database Manager, SSDM, and its integrations with data storage systems and computational frameworks. This includes scalable storage solutions for numeric multidimensional arrays and an efficient implementation of array operations. The arrays can be physically stored in a variety of external storage systems, including files, relational databases, and specialized array data stores, using our Array Storage Extensibility Interface. Whenever possible SSDM accumulates array operations and accesses array contents in a lazy fashion.

    In scientific applications numeric computations are often used for filtering or post-processing the retrieved data, which can be expressed in a functional way. Scientific SPARQL allows expressing common query sub-tasks with functions defined as parameterized queries. This becomes especially useful along with functional language abstractions such as lexical closures and second-order functions, e.g. array mappers.

Existing computational libraries can be interfaced and invoked from Scientific SPARQL queries as foreign functions. Cost estimates and alternative evaluation directions may be specified, aiding the construction of better execution plans. Costly array processing, e.g. filtering and aggregation, is thus performed on the server, reducing the amount of communication. Furthermore, common supported operations are delegated to the array storage back-ends, according to their capabilities. Both the expressivity and the performance of Scientific SPARQL are evaluated on a real-world example, and further performance tests are run using our mini-benchmark for array queries.
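
    The lazy accumulation of array operations mentioned above is what lets a query touch stored array data only once. The following is a minimal Python sketch of that general idea only; it does not show SSDM's actual interfaces, and the fetch function and operations are illustrative placeholders:

```python
# Sketch of lazy array access: operations on an array handle are recorded
# and only applied when the contents are actually needed, so a chain of
# operations results in a single access to back-end storage.
import numpy as np

class LazyArray:
    def __init__(self, fetch):
        self._fetch = fetch      # callable that loads the array from storage
        self._ops = []           # accumulated operations, applied on demand

    def map(self, fn):
        self._ops.append(fn)     # record the operation, do not execute yet
        return self

    def materialize(self):
        data = self._fetch()     # the single access to back-end storage
        for op in self._ops:
            data = op(data)
        return data

# usage: nothing is fetched until materialize() is called
a = LazyArray(lambda: np.arange(10))
result = a.map(lambda x: x * 2).map(lambda x: x[x > 5]).materialize()
print(result)   # [ 6  8 10 12 14 16 18]
```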

  • 3.
Berghult, Astrid
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
A practical comparison between algebraic and statistical attacks on the lightweight cipher SIMON (2016). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

In the summer of 2013 the NSA released a new family of lightweight block ciphers called SIMON. However, they did not publish any assessment of its security. Since then only a few papers on this topic have been released, and none of them has included an algebraic analysis. Moreover, only one paper described a practical implementation of an attack. This master thesis aims to implement practical attacks, both algebraic and differential, on SIMON. In doing so we are able to make a comparison between the two attack methods. The algebraic attack was executed with the SAT-solver CryptoMiniSat2 and could break 7 rounds. The differential attack was implemented in three steps. First we created a difference distribution table (DDT), and then we identified a differential by a search algorithm over the DDT. In the last step we designed a key recovery attack to recover the last round key. The attack could break 13 rounds for a 9-round differential. With a simple expansion of the key recovery attack it has the potential to break even more rounds for the same 9-round differential. This indicates that algebraic cryptanalysis might not be such a strong tool, since it could only break 7 rounds. Furthermore, if a generic algebraic attack does not work on SIMON, it has little or no chance of being successful on a more complex cipher. In other words, this algebraic attack may serve as a benchmark for the efficiency of generic algebraic attacks.
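
    The first step of the differential attack described here, building a difference distribution table, is easy to illustrate. Below is a toy Python sketch with a scaled-down word size and made-up rotation amounts; these are stand-ins, not SIMON's real parameters:

```python
# Toy difference distribution table (DDT) for a SIMON-style round function.
N = 8                      # toy word size in bits (SIMON uses larger words)
MASK = (1 << N) - 1

def rotl(x, r):
    return ((x << r) | (x >> (N - r))) & MASK

def f(x):
    # SIMON-style non-linear function: AND of two rotations, XOR a third
    return (rotl(x, 1) & rotl(x, 2)) ^ rotl(x, 3)

# ddt[din][dout] = number of inputs x with f(x) ^ f(x ^ din) == dout
ddt = [[0] * (1 << N) for _ in range(1 << N)]
for din in range(1 << N):
    for x in range(1 << N):
        ddt[din][f(x) ^ f(x ^ din)] += 1

# a high-probability differential is a (din, dout) pair with a large count
best = max(((din, dout, c) for din in range(1, 1 << N)
                           for dout, c in enumerate(ddt[din])),
           key=lambda t: t[2])
print("best differential:", best)
```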

  • 4.
    Backlund, Ludvig
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
A technical overview of distributed ledger technologies in the Nordic capital market (2016). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis examines how Distributed Ledger Technologies (DLTs) could be utilized in capital markets in general and in the Nordic capital market in particular. DLTs were introduced with the so called cryptocurrency Bitcoin in 2009 and has in the last few years been of interest to various financial institutions as a means to streamline financial processes. By combining computer scientific concepts such as public-key cryptography and consensus algorithms DLTs make it possible to keep shared databases with limited trust among the participators and without the use of a trusted third party. In this thesis various actors on the Nordic capital market were interviewed and their stance on DLTs were summarized. In addition to this a Proof of Concept of a permissioned DLT application for ownership registration of securities was constructed. It was found that all the interviewees were generally optimistic about DLTs potential to increase the efficiency of capital markets. The technology needs to be adopted to handle the capital markets demand for privacy and large transaction volumes, but there is a general agreement among the interviewees that these issues will be solved. The biggest challenge for an adoption of DLTs seem to lie in that of finding a common industry-wide standard.
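
    The core mechanism such a proof of concept builds on, an append-only ledger in which each record commits to its predecessor by a cryptographic hash, can be sketched briefly. The sketch below is a toy illustration, not the thesis's implementation; real DLTs add digital signatures and a consensus protocol on top:

```python
# Toy hash-chained ledger for ownership registration: tampering with any
# past record breaks the chain and is detected on verification.
import hashlib, json

def entry_hash(body):
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

ledger = []

def register(security, owner):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"security": security, "owner": owner, "prev": prev}
    entry["hash"] = entry_hash({k: entry[k] for k in ("security", "owner", "prev")})
    ledger.append(entry)

def verify():
    prev = "0" * 64
    for e in ledger:
        body = {k: e[k] for k in ("security", "owner", "prev")}
        if e["prev"] != prev or e["hash"] != entry_hash(body):
            return False
        prev = e["hash"]
    return True

register("SHARE-A", "alice")
register("SHARE-A", "bob")       # transfer of ownership
print(verify())                  # True
ledger[0]["owner"] = "mallory"   # tampering with history ...
print(verify())                  # ... is detected: False
```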

  • 5.
    Badiozamany, Sobhan
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Real-time data stream clustering over sliding windows (2016). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    In many applications, e.g. urban traffic monitoring, stock trading, and industrial sensor data monitoring, clustering algorithms are applied on data streams in real-time to find current patterns. Here, sliding windows are commonly used as they capture concept drift.

Real-time clustering over sliding windows means detecting continuously evolving clusters as soon as they occur in the stream, which requires efficient maintenance of cluster memberships that change as windows slide.

Data stream management systems (DSMSs) provide high-level query languages for searching and analyzing streaming data. In this thesis we extend a DSMS with a real-time data stream clustering framework called the Generic 2-phase Continuous Summarization framework (G2CS). G2CS modularizes data stream clustering by taking as input clustering algorithms which are expressed in terms of a number of functions and indexing structures. G2CS supports real-time clustering through an efficient window sliding mechanism and algorithm-transparent indexing. A particular challenge for real-time detection of a high number of rapidly evolving clusters is the efficiency of window slides for clustering algorithms where deletion of expired data is not supported, e.g. BIRCH. To that end, G2CS includes a novel window maintenance mechanism called Sliding Binary Merge (SBM). To further improve real-time sliding performance, G2CS uses generation-based multi-dimensional indexing, where indexing structures suitable for the clustering algorithms can be plugged in.

  • 6.
    Bernström, Kristian
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Näsman, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Utredning och implementering av en prototyp för integration av Prevas FOCS och ABB 800xA [Investigation and Implementation of a Prototype for Integrating Prevas FOCS and ABB 800xA] (2014). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

ABB and Prevas have initiated a collaboration to sell a system, called FOCS, that optimizes steel industry furnaces. The purpose of this thesis is to investigate the possibilities for integrating Prevas FOCS and ABB 800xA.

The result of the investigation is used for the implementation of a prototype of the integrated system. The study presents a general method that can be used when integrating two software systems. The prototype of the integrated system is made with usability principles in mind; this is a very important aspect in order to create a good working environment for the operators of a steel plant. It is also important to follow communication standards when integrating software systems. In industrial networks used in the steel industry, OPC is a standard for communication. We recommend that ABB and Prevas follow this standard when possible to make the integration smoother. To keep the cost of the integration to a minimum, it is also recommended to reuse existing resources. This can, however, have a negative effect on usability, and it is therefore important to keep a balance between cost and usability.

    The prototype made in this thesis accomplishes the goal of transferring the functionalities used by operators of Prevas FOCS to 800xA so that operators can control the processes using only one integrated system.

  • 7.
    Björdal, Gustav
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
String Variables for Constraint-Based Local Search (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    String variables occur as a natural part of many computationally challenging problems. Usually, such problems are solved using problem-specific algorithms implemented from first principles, which can be a time-consuming and error-prone task. A constraint solver is a framework that can be used to solve computationally challenging problems by first declaratively defining the problem and then solving it using specialised off-the-shelf algorithms, which can cut down development time significantly and result in faster solution times and higher solution quality. There are many constraint solving technologies, one of which is constraint-based local search (CBLS). However, very few constraint solvers have native support for solving problems with string variables. The goal of this thesis is to add string variables as a native type to the CBLS solver OscaR/CBLS. The implementation was experimentally evaluated on the Closest String Problem and the Word Equation System problem. The evaluation shows that string variables for CBLS can be a viable option for solving string problems. However, further work is required to obtain even more competitive performance.
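
    For readers unfamiliar with the Closest String Problem used in the evaluation, the following toy Python sketch shows the flavour of a local-search approach: repeatedly change single characters of a candidate string to reduce the maximum Hamming distance to the inputs. It illustrates the search style only; it is not OscaR/CBLS code, and the input strings are invented:

```python
# Hill-climbing sketch for the Closest String Problem: minimize the
# maximum Hamming distance from a candidate string to all input strings.
import random

def cost(cand, strings):
    return max(sum(a != b for a, b in zip(cand, s)) for s in strings)

def closest_string(strings, alphabet="ACGT", iters=2000, seed=0):
    rng = random.Random(seed)
    cand = list(rng.choice(strings))          # start from one input string
    best = cost(cand, strings)
    for _ in range(iters):
        i = rng.randrange(len(cand))          # pick a position to change
        old = cand[i]
        for ch in alphabet:                   # greedy move at that position
            cand[i] = ch
            c = cost(cand, strings)
            if c < best:
                best, old = c, ch
        cand[i] = old                         # keep the best character found
    return "".join(cand), best

strings = ["ACGTACGT", "ACGTTCGT", "TCGTACGA"]
print(closest_string(strings))    # e.g. ('ACGTACGT', 2)
```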

  • 8.
    Björklund, Henrik
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Combinatorial Optimization for Infinite Games on Graphs (2005). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

Games on graphs have become an indispensable tool in modern computer science. They provide powerful and expressive models for numerous phenomena and are extensively used in computer-aided verification, automata theory, logic, complexity theory, computational biology, etc.

    The infinite games on finite graphs we study in this thesis have their primary applications in verification, but are also of fundamental importance from the complexity-theoretic point of view. They include parity, mean payoff, and simple stochastic games.

    We focus on solving graph games by using iterative strategy improvement and methods from linear programming and combinatorial optimization. To this end we consider old strategy evaluation functions, construct new ones, and show how all of them, due to their structural similarities, fit into a unifying combinatorial framework. This allows us to employ randomized optimization methods from combinatorial linear programming to solve the games in expected subexponential time.

We introduce and study the concept of a controlled optimization problem, capturing the essential features of many graph games, and provide sufficient conditions for solvability of such problems in expected subexponential time.

The discrete strategy evaluation function for mean payoff games, which we derive from the new controlled longest-shortest path problem, leads to improvement algorithms that are considerably more efficient than the previously known ones, and it also improves the efficiency of algorithms for parity games.

We also define the controlled linear programming problem, and show how the games are translated into this setting. Subclasses of the problem, more general than the games considered, are shown to belong to NP ∩ coNP, or even to be solvable by subexponential algorithms.

    Finally, we take the first steps in investigating the fixed-parameter complexity of parity, Rabin, Streett, and Muller games.

  • 9.
    Bylund, Markus
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Personal service environments: Openness and user control in user-service interaction (2001). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

This thesis describes my work on making the whole experience of using electronic services more pleasant and practical. More and more people use electronic services in their daily life, be it services for communicating with colleagues or family members, web-based bookstores, or network-based games for entertainment. However, electronic services in general are more difficult to use than they need to be. They are limited in how and when users can access them. Services do not collaborate despite obvious advantages to their users, and they put the integrity and privacy of their users at risk.

In this thesis, I argue that there are structural reasons for these problems, rather than problems with the content or the technology per se. The focus when designing electronic services tends to be on the service providers or on the artifacts that are used for accessing the services. I present an approach that focuses on the user instead, based on the concept of personal service environments. These provide a mobile locale for storing and running the electronic services of individual users. This gives the user increased control over which services to use, from where they can be accessed, and what personal information services gather. The concept allows, and encourages, service collaboration, but not without letting the user maintain control over the process. Finally, personal service environments allow continuous usage of services while switching between interaction devices and moving between places.

The sView system, which is also described, implements personal service environments and serves as an example of how the concept can be realized. The system consists of two parts. The first part is a specification of how both services for sView and infrastructure for handling services should be developed. The second part is a reference implementation of the specification, which includes sample services that add to and demonstrate the functionality of sView.

  • 10.
    Carlsson, Per
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Market and resource allocation algorithms with application to energy control (2001). Licentiate thesis, monograph (Other scientific).
    Abstract [en]

    The energy markets of today are markets with rather few active participants. The participants are, with few exceptions, large producers and distributors. The market mechanisms that are used are constructed with this kind of a market situation in mind. With an automatic or semiautomatic approach, the market mechanism would be able to incorporate a larger number of participants. Smaller producers, and even consumers, could take an active part in the market. The gain is in more efficient markets, and – due to smaller fluctuations in demand – better resource usage from an environmental perspective.

    The energy markets of the Nordic countries (as well as many others) were deregulated during the last few years. The change has been radical and the situation is still rather new. We believe that the market can be made more efficient with the help of the dynamics of the small actors.

The idealised world of (economic) theory often relies on assumptions such as continuous demand and supply curves. These assumptions are useful, and they do not introduce problems in the power market situation of today, with its relatively few, large participants. When consumers and small producers are introduced on the market, the situation is different. Then it is a drawback if the market mechanisms cannot handle discontinuous supply and demand.

The growth in accessibility of computational power and data communications that we have experienced in recent years could be utilised when constructing mechanisms for the energy markets of tomorrow.

In this thesis we suggest a new market mechanism, ConFAst, that utilises this technological progress to make it possible to incorporate a large number of active participants on the market. The mechanism does not rely on the assumptions above. The gain is a more efficient market with smaller fluctuations in demand over the day.

To make this possible there is a need for efficient algorithms; in particular, the mechanism relies on an efficient aggregation algorithm. An algorithm for the aggregation of objective functions is part of this thesis. The algorithm handles maximisation with nonconcave, even noisy, objective functions. Experimental results show that the approach, in practically relevant cases, is significantly faster than the standard algorithm.
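
    The aggregation of objective functions can be illustrated with a discrete toy example: combining two participants' objective functions into one aggregate that records, for every total resource amount, the best achievable value and the corresponding split. This sketch assumes small discrete domains and omits the handling of noisy, nonconcave functions that the thesis algorithm addresses; the utility values are invented:

```python
# Pairwise aggregation of two objective functions over a discrete resource.
def aggregate(f, g):
    # f, g: lists where f[i] is the objective value for i resource units
    n, m = len(f), len(g)
    agg = [float("-inf")] * (n + m - 1)
    split = [0] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            if f[i] + g[j] > agg[i + j]:
                agg[i + j], split[i + j] = f[i] + g[j], i
    return agg, split     # split[r] = units given to the first participant

f = [0, 5, 7, 8]          # diminishing returns
g = [0, 3, 9, 10]         # nonconcave: big step at 2 units
agg, split = aggregate(f, g)
print(agg)                # [0, 5, 9, 14, 16, 17, 18]
print(split)              # best split for each total amount
```

    Aggregating pairwise up a tree gives the market mechanism a single function to optimize, after which the splits can be unwound to allocate resources to individual participants.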

  • 11.
    Carlsson, Per
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Algorithms for Electronic Power Markets (2004). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

In this thesis we focus on resource allocation problems and electronic markets in particular. Our main application area is electricity markets. We present a number of algorithms and include practical experience.

There is an ongoing restructuring of power markets in Europe and elsewhere; this implies that an industry that has previously been viewed as a natural monopoly becomes exposed to competition. In the thesis we move a step further, suggesting that end users should take an active part in the trade on power markets such as (i) day-ahead markets and (ii) markets handling close-to-real-time balancing of power grids. Our ideas and results can be utilised (a) to increase the efficiency of these markets and (b) to handle strained situations when power systems operate at their limits. For this we utilise information and communication technology available today and develop electronic market mechanisms designed for large numbers of participants, typically distributed over a power grid.

    The papers of the thesis cover resource allocation with separable objective functions, a market mechanism that accepts actors with discontinuous demand, and mechanisms that allow actors to express combinatorial dependencies between traded commodities on multi-commodity markets. Further we present results from field tests and simulations.

  • 12.
    Edqvist, Samuel
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Scheduling Physicians using Constraint Programming (2008). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
  • 13.
    Elvander, Adam
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Developing a Recommender System for a Mobile E-commerce Application (2015). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

This thesis describes the process of conceptualizing and developing a recommender system for a peer-to-peer commerce application. The application in question is called Plick and is a vintage clothes marketplace where private persons and smaller vintage retailers buy and sell secondhand clothes from each other. Recommender systems is a relatively young field of research but has become more popular in recent years with the advent of big data applications such as Netflix and Amazon. Examples of recommender systems being used in e-marketplace applications are however still sparse, and the main contribution of this thesis is insight into this sub-problem in recommender system research. The three main families of recommender algorithms are analyzed and two of them are deemed unfitting for the e-marketplace scenario. Out of the third family, collaborative filtering, three algorithms are described, implemented and tested on a large subset of data collected in Plick that consists mainly of clicks made by users on items in the system. By using both traditional and novel evaluation techniques it is further shown that a user-based collaborative filtering algorithm yields the most accurate recommendations when compared to actual user behavior. This represents a divergence from recommender systems commonly used in e-commerce applications. The paper concludes with a discussion on the cause and significance of this difference and the impact of certain data-preprocessing techniques on the results.
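
    A minimal sketch of user-based collaborative filtering on click data, the general approach the thesis found most accurate, might look as follows. The click data, similarity measure and scoring details are invented for illustration and are not taken from the thesis:

```python
# User-based collaborative filtering: score unseen items for a user by
# the clicks of the most similar users (cosine similarity of click sets).
from math import sqrt

clicks = {                      # user -> set of clicked item ids (toy data)
    "u1": {"a", "b", "c"},
    "u2": {"a", "b", "d"},
    "u3": {"c", "e"},
}

def sim(u, v):
    inter = len(clicks[u] & clicks[v])
    return inter / (sqrt(len(clicks[u])) * sqrt(len(clicks[v])))

def recommend(user, k=2):
    neighbours = sorted((v for v in clicks if v != user),
                        key=lambda v: sim(user, v), reverse=True)[:k]
    scores = {}
    for v in neighbours:
        for item in clicks[v] - clicks[user]:   # only items the user hasn't seen
            scores[item] = scores.get(item, 0) + sim(user, v)
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("u1"))   # ['d', 'e']: 'd' comes from the most similar user
```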

  • 14.
    Fomkin, Ruslan
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Optimization and Execution of Complex Scientific Queries (2009). Doctoral thesis, monograph (Other academic).
    Abstract [en]

    Large volumes of data produced and shared within scientific communities are analyzed by many researchers to investigate different scientific theories. Currently the analyses are implemented in traditional programming languages such as C++. This is inefficient for research productivity, since it is difficult to write, understand, and modify such programs. Furthermore, programs should scale over large data volumes and analysis complexity, which further complicates code development.

    This Thesis investigates the use of database technologies to implement scientific applications, in which data are complex objects describing measurements of independent events and the analyses are selections of events by applying conjunctions of complex numerical filters on each object separately. An example of such an application is analyses for the presence of Higgs bosons in collision events produced by the ATLAS experiment. For efficient implementation of such an ATLAS application, a new data stream management system SQISLE is developed. In SQISLE queries are specified over complex objects which are efficiently streamed from sources through the query engine. This streaming approach is compared with the conventional approach to load events into a database before querying. Since the queries implementing scientific analyses are large and complex, novel techniques are developed for efficient query processing. To obtain efficient plans for such queries SQISLE implements runtime query optimization strategies, which during query execution collect runtime statistics for a query, reoptimize the query using the collected statistics, and dynamically switch optimization strategies. The cost-based optimization utilizes a novel cost model for aggregate functions over nested subqueries. To alleviate estimation errors in large queries the fragments are decomposed into conjunctions of subqueries over which runtime statistics are measured. Performance is further improved by query transformation, view materialization, and partial evaluation. ATLAS queries in SQISLE using these query processing techniques perform close to or better than hard-coded C++ implementations of the same analyses.

    Scientific data are often stored in Grids, which manage both storage and computational resources. This Thesis includes a framework POQSEC that utilizes Grid resources to scale scientific queries over large data volumes by parallelizing the queries and shipping the data management system itself, e.g. SQISLE, to Grid computational nodes for the parallel query execution.

  • 15.
    Fransson, Moa
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Fåhraeus, Lisa
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Finding Patterns in Vehicle Diagnostic Trouble Codes: A data mining study applying associative classification (2015). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

In Scania vehicles, Diagnostic Trouble Codes (DTCs) are collected while driving and later loaded into a central database when the vehicle visits a workshop. These DTCs are used statistically to analyse vehicles' health statuses, which is why correctness in the data is desirable. DTCs can, however, also be generated in workshops as a result of work and tests, yet they are loaded into the database without any notification. In order to perform an accurate analysis of vehicle health status, it would be desirable if such DTCs could be found and removed. The thesis has examined whether this is possible by searching for patterns in DTCs that indicate whether the DTCs were generated in a workshop or not. Due to its easily interpretable outcome, an Associative Classification method was used with the aim of categorising the data. The classifier was built applying well-known algorithms, and two classification algorithms were then developed to fit the data structure when labelling new data. The final classifier performed with an accuracy above 80 percent, with no distinctive differences between the two algorithms. However, barely 50 percent of all workshop DTCs were found. The conclusion is that either patterns occur in only 50 percent of the workshop DTCs, or the classifier can only detect 50 percent of them. The patterns found could confirm previous knowledge regarding workshop-generated DTCs as well as provide Scania with new information.
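
    Associative classification, as described here, can be sketched in a few lines: mine "feature set implies class" rules with sufficient support and confidence, then label new records with the most confident matching rule. The sketch below is a simplified illustration of the technique in general, not the thesis's algorithms, and the DTC codes are invented:

```python
# Toy associative classifier over labelled DTC readouts.
from itertools import combinations
from collections import Counter

data = [  # (set of DTCs read out, True if generated in a workshop)
    ({"P100", "P200"}, True), ({"P100", "P200"}, True),
    ({"P100", "P300"}, False), ({"P300"}, False), ({"P200"}, True),
]

def mine_rules(data, min_support=2, min_conf=0.8, max_len=2):
    rules = []
    items = set().union(*(t for t, _ in data))
    for n in range(1, max_len + 1):
        for lhs in combinations(sorted(items), n):
            lhs = set(lhs)
            matches = [label for t, label in data if lhs <= t]
            if len(matches) < min_support:
                continue                       # not enough support
            label, count = Counter(matches).most_common(1)[0]
            if count / len(matches) >= min_conf:
                rules.append((lhs, label, count / len(matches)))
    return sorted(rules, key=lambda r: -r[2])  # most confident first

rules = mine_rules(data)

def classify(t):
    for lhs, label, conf in rules:             # first (= most confident) match
        if lhs <= t:
            return label
    return None

print(classify({"P100", "P200", "P400"}))      # True (workshop pattern)
```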

  • 16.
    Fredriksson, Susanne
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Auktorisation och ackreditering inom Försvarsmakten: En studie i nyttan av en standardiserad process för att hantera informationssäkerhet [Authorisation and Accreditation in the Swedish Armed Forces: A Study of the Benefits of a Standardised Process for Managing Information Security] (2011). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

Information technology is an essential part of society today, not least in large organisations dealing with sensitive information. An example of such an organisation is the Swedish Armed Forces, which is indeed in need of ways to ensure information security in its information technology systems. The means used is an authorisation and accreditation process.

All information technology systems go through a life cycle which includes realisation, usage, development and liquidation. In the Swedish Armed Forces this life cycle takes the form of an authorisation process. Each step in the process is followed by an authorisation decision, and one of these steps is accreditation. Accreditation is a formal approval to put the system into operation.

    The aim of the thesis is to study how large organisations may ensure information security when developing IT and the importance of a standardised process to handle information security. The study has been carried out by comparing the process of the Swedish Armed Forces with other methods to run projects and theories concerning development of IT.

    Interviews with Information Security Consultants at Combitech AB along with a study of documentation have been carried out in order to study the process. The theoretical framework has been formed through a literature study of project models and methods for development of IT.

The main result of the thesis is that a standardised process which manages quality, communication, traceability, complexity, aims, operation and liquidation plans, risk assessment, and use cases increases the chance of a successful project and of achieving information security in the development of new IT. High quality in the formulation of the security aims and the specification of requirements, along with tests to establish that all requirements are fulfilled, is crucial when it comes to information security.

  • 17.
    Gutkovas, Ramunas
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Advancing concurrent system verification: Type based approach and tools (2014). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

Concurrent systems, i.e., systems of parallel processes, are nearly ubiquitous, and verifying the correctness of such systems is becoming an important subject. Many formalisms have been invented for this purpose; however, new types of systems keep being introduced and there is a need for handling larger systems. One example is wireless sensor networks, which are being deployed in increasing numbers in various areas, in particular safety-critical ones, e.g., bush fire detection. Thus, ensuring their correctness is important.

    A process calculus is a formal language for modeling concurrent systems. The pi-calculus is a prominent example of such a language featuring message-passing concurrency. Psi-calculi is a parametric framework that extends the pi-calculus with arbitrary data and logics. Psi-calculi feature a universal theory with its results checked in an automated theorem prover ensuring their correctness.

    In this thesis, we extend psi-calculi expressiveness and modeling precision by introducing a sort system and generalised pattern matching. We show that the extended psi-calculi enjoy the same meta-theoretical results.

    We have developed the Pwb, a tool for the psi-calculi framework. The tool provides a high-level interactive symbolic execution and automated behavioral equivalence checking. We exemplify the use of the tool by developing a high-level executable model of a data collection protocol for wireless sensor networks.

We are the first to introduce a session-type based system for systems with unreliable communication. Remarkably, we do not need to add specific extensions to the types to accommodate such systems. We prove that the standard desirable properties of type systems also hold for our type system.

  • 18.
    Gutkovas, Ramūnas
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Languages, Logics, Types and Tools for Concurrent System Modelling (2016). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

A concurrent system is a computer system with components that run in parallel and interact with each other. Such systems are ubiquitous and are notably responsible for supporting the infrastructure for transport, commerce and entertainment. They are very difficult to design and implement correctly: many different modeling languages and verification techniques have been devised to reason about them and to verify their correctness. However, existing languages and techniques can only express a limited range of systems and properties.

In this dissertation, we address some of the shortcomings of established models and theories in four ways: by introducing a general modal logic, extending a modelling language with types and a more general operation, providing automated tool support, and adapting an established behavioural type theory to specify and verify systems with unreliable communication.

A modal logic for transition systems is a way of specifying properties of concurrent systems abstractly. We have developed a modal logic for nominal transition systems. Such systems are common and include the pi-calculus and psi-calculi. The logic is adequate for many process calculi with regard to their behavioural equivalence, even for those for which no such logic had previously been considered, for example CCS, the pi-calculus, psi-calculi, the spi-calculus, and the fusion calculus.

    The psi-calculi framework is a parametric process calculi framework that subsumes many existing process calculi. We extend psi-calculi with a type system, called sorts, and a more general notion of pattern matching in an input process. This gives additional expressive power allowing us to capture directly even more process calculi than was previously possible. We have reestablished the main results of psi-calculi to show that the extensions are consistent.

    We have developed a tool that is based on the psi-calculi, called the psi-calculi workbench. It provides automation for executing the psi-calculi processes and generating a witness for a behavioural equivalence between processes. The tool can be used both as a library and as an interactive application.

    Lastly, we developed a process calculus for unreliable broadcast systems and equipped it with a binary session type system. The process calculus captures the operations of scatter and gather in wireless sensor and ad-hoc networks. The type system enjoys the usual property of subject reduction, meaning that well-typed processes reduce to well-typed processes. To cope with unreliability, we also introduce a notion of process recovery that does not involve communication. This is the first session type system for a model with unreliable communication.

  • 19.
    Hassani Bijarbooneh, Farshid
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Constraint Programming for Wireless Sensor Networks (2015). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    In recent years, wireless sensor networks (WSNs) have grown rapidly and have had a substantial impact in many applications. A WSN is a network that consists of interconnected autonomous nodes that monitor physical and environmental conditions, such as temperature, humidity, pollution, etc. If required, nodes in a WSN can perform actions to affect the environment.

WSNs present an interesting and challenging field of research due to the distributed nature of the network and the limited resources of the nodes. A node in a WSN needs to be small, to enable easy deployment in an environment, and to consume as little energy as possible, to prolong its battery lifetime. There are many challenges in WSNs, such as programming a large number of nodes, designing communication protocols, achieving energy efficiency, respecting limited bandwidth, and operating with limited memory. WSNs are further constrained by the deployment of the nodes in indoor and outdoor environments and by obstacles in the environment.

    In this dissertation, we study some of the fundamental optimisation problems related to the programming, coverage, mobility, data collection, and data loss of WSNs, modelled as standalone optimisation problems or as optimisation problems integrated with protocol design. Our proposed solution methods come from various fields of research including constraint programming, integer linear programming, heuristic-based algorithms, and data inference techniques.

  • 20.
    He, Jun
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Constraints for Membership in Formal Languages under Systematic Search and Stochastic Local Search (2013). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

This thesis focuses on constraints for membership in formal languages under both the systematic search and stochastic local search approaches to constraint programming (CP). Such constraints are very useful in CP for three reasons: they provide a powerful tool for user-level extensibility of CP languages; they are very useful for modelling complex work shift regulation constraints, which exist in many shift scheduling problems; and string constraints often arise in the analysis, testing, and verification of string-manipulating programs. We show in this thesis that CP solvers with constraints for membership in formal languages are much more suitable than the existing solvers used in tools that have to solve string constraints.

    In the stochastic local search approach to CP, we make two contributions: we introduce a stochastic method of maintaining violations for the regular constraint and extend our method to the automaton constraint with counters; and, to improve the usage of constraints for which no constant-time algorithm for neighbour evaluation is known, we introduce a framework of using solution neighbourhoods and give an efficient algorithm for constructing a solution neighbourhood for the regular constraint.

    In the systematic search approach to CP, we make two contributions: we show that there may be unwanted consequences when using a propagator that underestimates the cost of a soft constraint, as the propagator may guide the search to incorrect (non-optimum) solutions of an over-constrained problem, and we introduce and compare several propagators that correctly compute the cost of the edit-distance-based soft-regular constraint; and we show that the context-free grammar constraint is useful and introduce an improved propagator for it.
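
    As one concrete example of the kind of violation measure discussed above, the following toy Python sketch computes a Hamming-distance-based violation of a regular constraint: the minimum number of positions of the current assignment that would have to change for the automaton to accept. This is a generic illustration, not the thesis's method, and the DFA is made up:

```python
# Violation of a regular constraint as minimum Hamming distance to an
# accepted word, via dynamic programming over (position, DFA state).
def violation(word, delta, start, accepting):
    # dist[q] = min changes needed so the DFA is in state q after the
    # symbols consumed so far
    dist = {start: 0}
    for sym in word:
        nxt = {}
        for q, d in dist.items():
            for a, q2 in delta.get(q, {}).items():
                cost = d + (a != sym)        # +1 if we must change sym to a
                if cost < nxt.get(q2, float("inf")):
                    nxt[q2] = cost
        dist = nxt
    return min(dist[q] for q in dist if q in accepting)

# toy DFA over {'a','b'} accepting strings with no two consecutive 'b's
delta = {0: {"a": 0, "b": 1}, 1: {"a": 0}}
print(violation("abba", delta, 0, {0, 1}))   # 1: change one of the b's
```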

  • 21.
    Henriksson, David
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Innovation in software development: Lean and agile's effects on slack time and innovation in a large enterprise (2016). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

The software industry is growing, and companies are forced to respond to the increased competition by innovating while at the same time making their organizations as efficient as possible. Many companies are adopting Lean and Agile software development to satisfy the customer and deliver faster. Previous research shows that it is hard to balance innovation and efficiency, and also that while slack time is important for innovation, it is sometimes removed when waste is eliminated under the Lean philosophy.

The aim of this research is to explore how Lean and Agile software development affects slack time and innovation within a large organization. To achieve this, the thesis has studied how the implementation of Lean and Agile software development within a subsidiary of a large global technology company has affected innovation. The study was carried out using qualitative semi-structured interviews.

This research shows that a clash between Lean and Agile structures and more traditional ones may cause a lack of slack time due to resource optimization, taking away time that could be spent on innovation. It also appears that Lean and Agile software development implemented in a large company potentially favors process innovations over product innovations, and incremental innovation over radical innovation.

    A possible means to aid both the ideation and productification phases of innovation within a software development organization could be to have true cross-functional teams where developers have more contact with product management and customers.

  • 22.
    Ivanova, Milena
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Scalable Scientific Stream Query Processing (2005). Doctoral thesis, monograph (Other academic).
    Abstract [en]

    Scientific applications require processing of high-volume on-line streams of numerical data from instruments and simulations. In order to extract information and detect interesting patterns in these streams scientists need to perform on-line analyses including advanced and often expensive numerical computations. We present an extensible data stream management system, GSDM (Grid Stream Data Manager) that supports scalable and flexible continuous queries (CQs) on such streams. Application dependent streams and query functions are defined through an object-relational model.

Distributed execution plans for continuous queries are specified as high-level data flow distribution templates. A built-in template library provides several common distribution patterns from which complex distribution patterns are constructed. Using a generic template we define two customizable partitioning strategies for scalable parallel execution of expensive stream queries: window split and window distribute. Window split provides parallel execution of expensive query functions by reducing the size of stream data units, using application dependent functions as parameters. By contrast, window distribute provides customized distribution of entire data units without reducing their size. We evaluate these strategies for a typical high-volume scientific stream application and show that window split is favorable when expensive queries are executed on limited resources, while window distribute is better otherwise. Profile-based optimization automatically generates optimized plans for a class of expensive query functions. We further investigate requirements for GSDM in Grid environments.

    GSDM is a fully functional system for parallel processing of continuous stream queries. GSDM includes components such as a continuous query engine based on a data-driven data flow paradigm, a compiler of CQ specifications into distributed execution plans, stream interfaces and communication primitives. Our experiments with real scientific streams on a shared-nothing architecture show the importance of both efficient processing and communication for efficient and scalable distributed stream processing.
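
    The two partitioning strategies described above can be contrasted with a small sketch: window split reduces each data unit with a user-supplied function before parallel processing, while window distribute ships entire units to workers unchanged. The functions below are illustrative placeholders, not GSDM's API; the example query is chosen to be decomposable so both strategies give the same answer:

```python
# Window split vs. window distribute, in miniature.
def window_split(windows, split_fn, workers, merge_fn):
    # split each window into one smaller piece per worker, merge results
    results = []
    for w in windows:
        pieces = split_fn(w, len(workers))
        partial = [worker(p) for worker, p in zip(workers, pieces)]
        results.append(merge_fn(partial))
    return results

def window_distribute(windows, workers):
    # round-robin: each worker processes entire windows
    return [workers[i % len(workers)](w) for i, w in enumerate(windows)]

expensive = lambda xs: sum(x * x for x in xs)    # stand-in query function
workers = [expensive, expensive]

windows = [[1, 2, 3, 4], [5, 6, 7, 8]]
halves = lambda w, n: [w[:len(w) // 2], w[len(w) // 2:]]   # assumes 2 workers
print(window_split(windows, halves, workers, sum))   # [30, 174]
print(window_distribute(windows, workers))           # [30, 174]
```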

  • 23.
    Johnsson, Matilda
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Knowledge sharing in a competence intensive company (2015). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

The overall purpose of this study is to investigate a competence-intensive company by evaluating its knowledge sharing. By identifying barriers, and the ways in which IT can have a positive impact on knowledge sharing, the study investigates how knowledge can be shared more efficiently within the company.

A thorough case study was conducted and analysed with respect to a model proposed by the author. The model consists of three dimensions of knowledge creation, the "how, where and what", together with the identified barriers: individual, organisational and technological. Some of the identified barriers are lack of time, insufficient capture and evaluation of past mistakes, and lack of a supporting organisational structure, rewards and recognition. IT is vital for the everyday work, and with improved tools time can be freed up. To share knowledge more efficiently, management should communicate the value of knowledge sharing as well as implement knowledge sharing initiatives. There should also be explicit rewards for sharing knowledge, motivating the employees to take the time to share it. Another approach, focusing on the employees' socialisation rather than on documentation, should be advocated, and more time needs to be freed up. Upgrading the IT tools and harvesting lessons learnt from projects could accordingly enable more effective knowledge sharing in the company.

  • 24.
    Jonsson, Daniel
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Betal- och biljettsystem i kollektivtrafik: en kartläggning av aktörer och roller [Payment and Ticketing Systems in Public Transport: A Mapping of Actors and Roles] (2010). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

An interoperable fare management system for public transport has long been a struggle for the actors in the public transport industry. A lot of effort has been put into creating a smart card solution that enables travellers to use the same card for paying and holding the ticket regardless of geographical position in Sweden. That project has failed, mostly due to a failure to take non-technical actors and aspects into consideration during the project process. This thesis highlights the major actors and their responsibilities. The ARKTRANS framework is used as an analytical tool to identify different roles, and a sociotechnical perspective is used throughout the thesis. The method developed in this thesis is suitable for mapping actors and responsibilities in a network surrounding a complex technical system, and especially for analysing the ongoing project in middle Sweden developing and implementing a new fare management system for public transport. The study has identified a lack of actors in the role of Fare Management Interoperability Provider. This absence is particularly interesting since a collaboration in southern Sweden, which has a Fare Management Interoperability Provider, has succeeded in the struggle towards interoperability.

  • 25.
    Karlsson, Maria
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Market based programming and resource allocation (2003). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

The subject of this thesis is the concept of market-oriented programming and market protocols. We want to solve an allocation problem where some resources are to be divided among a number of agents. Each agent has a utility function telling how much the current allocation is worth to it. The goal is to allocate the resources among the agents in a way that maximizes the sum of the utilities of all agents. To solve this problem we use the concept of markets to create mechanisms for computational implementation.

To achieve the advantages of market-oriented programming, we have to treat the conceptual view of the problem as a main design issue. We want to investigate the possibilities of building computationally effective mechanisms which maintain the intuitive, easy-to-understand structure of market-based approaches. In the first paper we look at two examples from the literature and show that conceptual improvements of the approaches make agent behavior more realistic. This also makes the examples fit into a more general theory. In the second paper we create a market mechanism for handling combinatorial markets. The mechanism includes an auction, where each iteration runs in polynomial time. The mechanism shows good performance when the number of resources is relatively small compared to the number of agents.

  • 26.
    Katchaounov, Timour
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
Query Processing for Peer Mediator Databases (2003). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

The ability to physically interconnect many distributed, autonomous and heterogeneous software systems on a large scale presents new opportunities for sharing and reuse of existing information, and for the creation of new information and new computational services. However, finding and combining information in many such systems is a challenge even for the most advanced computer users. To address this challenge, mediator systems logically integrate many sources to hide their heterogeneity and distribution and give the users the illusion of a single coherent system.

    Many new areas, such as scientific collaboration, require cooperation between many autonomous groups willing to share their knowledge. These areas require that the data integration process can be distributed among many autonomous parties, so that large integration solutions can be constructed from smaller ones. For this we propose a decentralized mediation architecture, peer mediator systems (PMS), based on the peer-to-peer (P2P) paradigm. In a PMS, reuse of human effort is achieved through logical composability of the mediators in terms of other mediators and sources by defining mediator views in terms of views in other mediators and sources.

    Our thesis is that logical composability in a P2P mediation architecture is an important requirement and that composable mediators can be implemented efficiently through query processing techniques.

    In order to compute answers of queries in a PMS, logical mediator compositions must be translated to query execution plans, where mediators and sources cooperate to compute query answers. The focus of this dissertation is on query processing methods to realize composability in a PMS architecture in an efficient way that scales over the number of mediators.

    Our contributions consist of an investigation of the interfaces and capabilities for peer mediators, and the design, implementation and experimental study of several query processing techniques that realize composability in an efficient and scalable way.

  • 27.
    Larmuseau, Adriaan
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Protecting Functional Programs From Low-Level Attackers2016Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Software systems are growing ever larger. Early software systems were singular units developed by small teams of programmers writing in the same programming language. Modern software systems, on the other hand, consist of numerous interoperating components written by different teams and in different programming languages. While this more modular and diversified approach to software development has enabled us to build ever larger and more complex software systems, it has also made it harder to ensure their reliability and security.

    In this thesis we study and remedy the security flaws that arise when attempting to resolve the difference in abstractions between components written in high-level functional programming languages and components written in imperative low-level programming languages. High-level functional programming languages treat computation as the evaluation of mathematical functions. Low-level imperative programming languages, on the contrary, provide programmers with features that enable them to directly interact with the underlying hardware. While these features help programmers write more efficient software, they also make it easy to mount attacks through techniques such as buffer overflows and return-oriented programming.

    Concretely, we develop new run-time approaches for protecting components written in functional programming languages from malicious components written in low-level programming languages, by making use of an emerging memory isolation mechanism called the Protected Module Architecture (PMA). Informally, PMA isolates the code and data that reside within a certain area of memory by restricting access to that area based on the location of the program counter.

    We develop these run-time protection techniques, which make use of PMA, for three important areas where components written in functional programming languages are threatened by malicious low-level components: foreign function interfaces, abstract machines and compilation. In each of these three areas, we formally prove that our run-time protection techniques are indeed secure. In addition, we provide implementations of our ideas through a fully functional compiler and a well-performing abstract machine.
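
    The program-counter-based access rule is easy to illustrate in isolation. Below is a minimal sketch in Python of how a PMA-style check behaves; the address ranges and the rule itself are simplified illustrations, not the formal model used in the thesis.

        # Sketch of PMA-style memory isolation (illustrative simplification).
        # Data inside the protected module may only be accessed by code that
        # is itself inside the module, i.e. when the program counter is there.

        PROTECTED_CODE = range(0x1000, 0x2000)   # hypothetical code section
        PROTECTED_DATA = range(0x2000, 0x3000)   # hypothetical data section

        def access_allowed(pc: int, addr: int) -> bool:
            """May the instruction at `pc` read or write `addr`?"""
            if addr in PROTECTED_DATA:
                return pc in PROTECTED_CODE      # only the module touches its data
            return True                          # unprotected memory is open

        assert access_allowed(pc=0x1100, addr=0x2800)       # internal access
        assert not access_allowed(pc=0x5000, addr=0x2800)   # low-level attacker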

  • 28.
    Magnusson, Jonathan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Social Network Analysis Utilizing Big Data Technology2012Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    As of late there has been an immense increase of data within modern society. This is evident within the field of telecommunications, where the amount of mobile data is growing fast. For a telecommunication operator, this provides means of getting more information about specific subscribers. The applications of this are many, such as segmentation for marketing purposes or detection of churners, people about to switch operator. Thus the analysis and information extraction is of great value. One approach to this analysis is social network analysis. Utilizing such methods yields ways of finding the importance of each individual subscriber in the network.

    This thesis aims at investigating the usefulness of social network analysis in telecommunication networks. As these networks can be very large, the methods used to study them must scale linearly when the network size increases. Thus, an integral part of the study is to determine which social network analysis algorithms have this scalability. Moreover, comparisons of software solutions are performed to find products suitable for these specific tasks.

    Another important part of using social network analysis is to be able to interpret the results. This can be cumbersome without expert knowledge. For that reason, a complete process flow for finding influential subscribers in a telecommunication network has been developed. The flow uses input easily available to the telecommunication operator. In addition to using social network analysis, machine learning is employed to uncover what behavior is associated with influence and to pinpoint subscribers behaving accordingly.
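
    As a toy illustration of the centrality computations such a flow builds on, the sketch below scores subscribers in a small call graph with networkx; the graph, the chosen measures and the library are illustrative assumptions, not the thesis's setup.

        # Toy call graph: nodes are subscribers, edges are observed calls.
        import networkx as nx

        calls = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
                 ("carol", "dave"), ("dave", "erin")]
        G = nx.Graph(calls)

        # Degree centrality runs in O(V + E), so it scales linearly with the
        # network; PageRank-style measures need repeated passes over all edges.
        degree = nx.degree_centrality(G)
        rank = nx.pagerank(G)

        # Rank subscribers by estimated influence.
        for node in sorted(rank, key=rank.get, reverse=True):
            print(f"{node}: degree={degree[node]:.2f}, pagerank={rank[node]:.3f}")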

  • 29.
    Melander, Lars
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Integrating Visual Data Flow Programming with Data Stream Management2016Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Data stream management and data flow programming have many things in common. In both cases one wants to transfer possibly infinite sequences of data items from one place to another, while performing transformations to the data. This Thesis focuses on the integration of a visual programming language with a data stream management system (DSMS) to support the construction, configuration, and visualization of data stream applications. In the approach, analyses of data streams are expressed as continuous queries (CQs) that emit data in real-time. The LabVIEW visual programming platform has been adapted to support easy specification of continuous visualization of CQ results. LabVIEW has been integrated with the DSMS SVALI through a stream-oriented client-server API. Query programming is declarative, and it is desirable to make the stream visualization declarative as well, in order to raise the abstraction level and make programming more intuitive. This has been achieved by adding a set of visual data flow components (VDFCs) to LabVIEW, based on the LabVIEW actor framework. With actor-based data flows, visualization of data stream output becomes more manageable, avoiding the procedural control structures used in conventional LabVIEW programming while still utilizing the comprehensive, built-in LabVIEW visualization tools.

    The VDFCs are part of the Visual Data stream Monitor (VisDM), which is a client-server based platform for handling real-time data stream applications and visualizing stream output. VDFCs are based on a data flow framework that is constructed from the actor framework, and are divided into producers, operators, consumers, and controls. They allow a user to set up the interface environment, customize the visualization, and convert the streaming data to a format suitable for visualization.

    Furthermore, it is shown how LabVIEW can be used to graphically define interfaces to data streams and dynamically load them in SVALI through a general wrapper handler. As an illustration, an interface has been defined in LabVIEW for accessing data streams from a digital 3D antenna.

    VisDM has successfully been tested in two real-world applications, one at Sandvik Coromant and one at the Ångström Laboratory, Uppsala University. For the first case, VisDM was deployed as a portable system to provide direct visualization of machining data streams. The data streams can differ in many ways as do the various visualization tasks. For the second case, data streams are homogenous, high-rate, and query operations are much more computation-demanding. For both applications, data is visualized in real-time, and VisDM is capable of sufficiently high update frequencies for processing and visualizing the streaming data without obstructions.

    The uniqueness of VisDM is the combination of a powerful and versatile DSMS with visually programmed and completely customizable visualization, while maintaining the complete extensibility of both.
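
    The producer/operator/consumer split can be illustrated with an ordinary data flow pipeline; a minimal sketch using Python generators, which stand in for the actor-based VDFCs (LabVIEW itself is graphical, so this is only an analogy).

        # Sketch: a producer -> operator -> consumer data flow, the role
        # split used by the VDFCs (generators stand in for actors).
        import math

        def producer(n):                       # emits a stream of samples
            for t in range(n):
                yield math.sin(t / 5.0)

        def smooth(stream, alpha=0.3):         # operator: exponential smoothing
            avg = None
            for x in stream:
                avg = x if avg is None else alpha * x + (1 - alpha) * avg
                yield avg

        def consumer(stream):                  # consumer: "visualize" the values
            for x in stream:
                print(f"{x:+.2f} " + "#" * int((x + 1) * 10))

        consumer(smooth(producer(20)))         # wire the pipeline together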

  • 30.
    Norlander, Olof
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Analysis of stock forum texts to examine correlation to stock prices2016Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this thesis, four methods of classification from statistical learning have been used to examine correlations between stock forum discussions and stock prices. The classifiers Naive Bayes, support vector machine, AdaBoost and random forest were used on text data from two different stock forums to see if the text had any predictive power for the stock price of five different companies. The volatility and the direction of the price over a day, whether it would go up or down, were measured. The highest accuracy obtained for predicting high or low volatility came from random forest and was 85.2 %. For price difference the highest accuracy was 69.2 %, using the support vector machine. The average accuracy for predicting the price difference was 58.6 % and the average accuracy for predicting the volatility was 73.4 %. This thesis was made in collaboration with the company Scila, which works with stock market security.
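
    A minimal sketch of this kind of pipeline, assuming scikit-learn implementations of two of the classifiers; the forum texts and labels are placeholders, not the thesis's data.

        # Sketch: classify forum texts into next-day price-up vs price-down.
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split
        from sklearn.svm import LinearSVC

        posts = ["strong report, buying more", "weak guidance, selling",
                 "great quarter ahead", "management is a disaster"]  # placeholders
        labels = [1, 0, 1, 0]                                        # 1 = price up

        X = TfidfVectorizer().fit_transform(posts)
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5,
                                                  random_state=0, stratify=labels)
        for clf in (LinearSVC(), RandomForestClassifier(n_estimators=100)):
            clf.fit(X_tr, y_tr)
            print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))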

  • 31.
    Norrmalm, Thomas
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Achieving Lean Software Development: Implementation of Agile and Lean Practices in a Manufacturing-Oriented Organization2011Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The study reveals improvement areas in terms of lead time and quality in a traditional software development process of a large manufacturing-oriented organization, and identifies four obstacles to the application of a Lean software development framework in order to achieve such improvements. The data from interviews are matched to four predefined categories. These categories are evaluated using value stream mapping and a framework of seven common improvement areas in software development. A large project and task tracking system indicates that lead time is a real problem in the process. The most significant improvement area is wait time for change approval meetings. A second prominent improvement area is the large number of approval handshakes. At least a few of these handshakes are always approved, thus adding unnecessary lead time to the process.

    The four most imminent obstacles in adopting lean software development are identified through estimating the efficiency of two in-house derivations of Scrum and Kanban. The first obstacle is deep vertical but narrow horizontal expertise among developers. With some systems, there is only one developer who knows how to maintain the product. This makes it impossible to work as a team, which is an imperative principle of lean. A second obstacle is how the teams are arranged organizationally. They have a functional setup over three departments and three managers, which to some extent creates a silo mentality, rendering cooperation difficult. A third obstacle is how the teams are arranged geographically. Split over two locations, manufacturing and headquarters, they have different customers, objectives and a plain unfamiliarity with one another that has reduced the will and opportunity to communicate and coordinate. A fourth obstacle is the inherent conflict between the prescriptive activities of ITIL, optimized for IT operational services, and the adaptability of agile methodologies, optimized for rapid change and empirical decisions. ITIL fulfills a sometimes uncalled-for need to get all changes approved through several layers of management.

    The study concludes that Lean software development is in conflict with many traditional values of a manufacturing organization. Although lean may be prevalent in other parts of the organization, this does not necessarily include the IT function. IT still seems to have a hard time grasping the lean concepts of flow, waste and value.

  • 32.
    Olsson, Tomas
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Bootstrapping and decentralizing recommender systems2003Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of three papers on recommender systems.

    The first paper addresses the problem of making decentralized recommendations using a peer-to-peer architecture. Collaborating recommender agents are connected into a network of neighbors that exchange user recommendations to find new items to recommend. We achieved a performance comparable to that of a centralized system.

    The second paper deals with the bootstrapping problem for centralized recommender systems. Most recommender systems learn from experience but initially there are no users and no rated items to learn from. To deal with this problem we have developed the Genesis method. The method bootstraps a recommender system with artificial user profiles sampled from a probabilistic model built from prior knowledge. The paper describes how this was done for a k-nearest neighbor recommender algorithm in the movie domain. We were able to improve the performance of a k-nearest neighbor algorithm for single predictions but not for rank ordered lists of recommendations.

    The third paper identifies a new domain for recommendations – product configuration – where new recommender algorithms are needed. A system that helps users configure their own PCs is described. Recommendations and a cluster-based help system together with a rule-based configurator assist the users in selecting appropriate features or complete PC configurations. The configurator ensures that users cannot choose incompatible components, while the recommender system adds social information based on what other users have chosen. This introduces new complexity into the recommendation process: how to combine the recommendations from the configurator and the recommender system. The paper proposes (1) three new recommender algorithms for making recommendations in the domain of product configuration, (2) a method for adding social recommendations to a rule-based configurator and (3) a method for applying the Genesis method in this domain. In this case the Genesis method is implemented by a Bayesian belief net that captures the designers' prior knowledge on how to configure PCs. Then instances of complete configurations are sampled from the model and added to the recommender algorithm.
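
    The Genesis idea of the second paper can be sketched compactly: sample artificial user profiles from a prior model, then let a k-nearest-neighbor predictor run on them. The two-taste-group rating model below is a made-up stand-in for the probabilistic models built from prior knowledge in the papers.

        # Sketch of the Genesis method: bootstrap a k-NN recommender with
        # artificial profiles sampled from a simple probabilistic model.
        import numpy as np

        rng = np.random.default_rng(0)
        n_fake, n_items = 200, 10

        # Assumed prior: items 0-4 appeal to taste group 0, items 5-9 to group 1.
        group = rng.integers(0, 2, size=n_fake)
        means = np.where(group[:, None] == 0,
                         np.r_[np.full(5, 4.0), np.full(5, 2.0)],
                         np.r_[np.full(5, 2.0), np.full(5, 4.0)])
        profiles = np.clip(rng.normal(means, 1.0), 1, 5)  # artificial ratings

        def predict(user, item, k=10):
            """k-NN prediction for `item`, using only the artificial profiles."""
            known = ~np.isnan(user)
            dists = np.linalg.norm(profiles[:, known] - user[known], axis=1)
            return profiles[np.argsort(dists)[:k], item].mean()

        new_user = np.full(n_items, np.nan)   # cold-start user who rated 2 items
        new_user[[0, 1]] = [5.0, 4.5]
        print(predict(new_user, item=2))      # sensible prediction despite no data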

  • 33.
    Ottosson, Greger
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Integration of Constraint Programming and Integer Programming for Combinatorial Optimization2000Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The last several years have seen an increasing interest in combining the models and methods of optimization with those of constraint programming. Integration of the two was initially impeded by their different cultural origins, one having developed largely in the operations research community and the other in the computer science and artificial intelligence communities. The advantages of merger, however, are rapidly overcoming this barrier.

    The main objective for an integration of Constraint Programming over finite domains (CP) and Integer Programming (IP) is to take advantage of both the inference through constraint propagation and the (continuous) relaxations through Linear Programming (LP), in order to reduce the search needed to find feasible, good and optimal solutions.

    The key decisions to be made for integrating CP and IP are (a) the model(s), (b) the inference, (c) the relaxations, and (d) the search and branching strategies to use. This thesis advocates modeling specifically for a hybrid solver, having part of the model operated on by CP inference and part of the model constituting an LP relaxation. We propose mixed (global) constraints spanning both discrete and continuous variables to bridge the gap between CP and LP, providing inference as well as information for search strategies. Inference comes as cutting planes for LP and domain reductions for CP, respectively. Branching is done in the CP part, utilizing information from the LP relaxation.

    Apart from a general framework for integration, specific constraints and structures studied include several variations of variable subscripts and piecewise linear functions. Computational experiments show the benefit and potential of a hybrid scheme, illustrated on production planning and configuration problems.
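
    A toy illustration of this interplay, assuming scipy: a CP-style bounds propagation step tightens the variable domains, and the LP relaxation over the reduced domains then yields a bound for the search. Real hybrid solvers exchange such information iteratively.

        # Toy hybrid CP/IP step: propagate a constraint to shrink domains,
        # then solve the LP relaxation over the reduced bounds.
        from scipy.optimize import linprog

        # maximize 3x + 2y  subject to  x + y <= 4,  x in [0,3],  y in [2,3]
        x_lo, x_hi, y_lo, y_hi = 0, 3, 2, 3

        # CP-style bounds propagation on x + y <= 4.
        x_hi = min(x_hi, 4 - y_lo)   # x <= 2
        y_hi = min(y_hi, 4 - x_lo)   # y <= 3 (unchanged)

        # LP relaxation over the reduced domains (linprog minimizes, so negate).
        res = linprog(c=[-3, -2], A_ub=[[1, 1]], b_ub=[4],
                      bounds=[(x_lo, x_hi), (y_lo, y_hi)])
        print("bound on the objective:", -res.fun)   # guides branching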

  • 34.
    Petrini, Johan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Querying RDF Schema Views of Relational Databases2008Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The amount of data found on the web today and its lack of semantics makes it increasingly harder to retrieve a particular piece of information. With the Resource Description Framework (RDF) every piece of information can be annotated with properties describing its semantics. The higher level language RDF Schema (RDFS) is defined in terms of RDF and provides means to describe classes of RDF resources and properties defined over these classes. Queries over RDFS data can be specified using the standard query language SPARQL. Since the majority of information in the world still resides in relational databases it should be investigated how to view and query their contents as views defined in terms of RDFS meta-data descriptions. However, processing of queries to general RDFS views over relational databases is challenging since the queries and view definitions are complex and the amount of data often is huge. A system, Semantic Web Abridged Relational Databases (SWARD), is developed to enable efficient processing of SPARQL queries to RDFS views of data in existing relational databases. The RDFS views, called universal property views (UPVs), are automatically generated provided a minimum of user input. A UPV is a general RDFS view of a relational database representing both its schema and data. Special attention is devoted to making the UPV represent as much of the relational database semantics as possible, including foreign and composite keys. A general query reduction algorithm, called PARtial evaluation of Queries (PARQ), for queries over complex views, such as UPVs, has been developed. The reduction algorithm is based on the program transformation technique partial evaluation. For UPVs, the PARQ algorithm is shown to elegantly reduce queries dramatically before regular cost-based optimization by a back-end relational DBMS. The results are verified by performance measurements of SPARQL queries to a commercial relational database.

  • 35.
    Raabjerg, Palle
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Extending psi-calculi and their formal proofs2012Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Psi-calculi is a parametric framework for extensions of the pi-calculus, with arbitrary data structures and logical assertions for facts about data. This thesis presents broadcast psi-calculi and higher-order psi-calculi, two extensions of the psi-calculi framework, allowing respectively one-to-many communications and the use of higher-order process descriptions through conditions in the parameterised logic. Both extensions preserve the purity of the psi-calculi semantics; the standard congruence and structural properties of bisimilarity are proved formally in Isabelle. The work going into the extensions shows that, depending on the specific extension, working out the formal proofs can be a work-intensive process. We find that some of this work could be automated, and implementing such automation may facilitate the development of future extensions to the psi-calculi framework.

  • 36.
    Rorsman, Albin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Framtida IT-lösningar för ökad rörlighet på CAD-intensivt multidisciplinärt konsultföretag2016Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis investigates possible future IT solutions for increased mobility at CAD-intensive multi-disciplinary consulting companies. The development of new IT solutions is quickly moving forward, and new ways to implement IT in businesses are constantly being launched onto the market. As new IT solutions focused on increased mobility have become known, the interest from companies in implementing them in their businesses has increased. By investigating the engineering consultant company Bjerking AB and how it operates from an IT point of view, this thesis presents possible IT solutions for increased mobility for an engineering consulting company operating in the AEC sector. By conducting interviews with key users, observing meetings and researching the current state of Cloud Computing and hardware virtualization, this thesis shows that an implementation of an IT solution based on Cloud Computing and hardware virtualization is possible. By implementing a cloud-based IT solution using a private cloud distribution model and hardware-assisted GPU virtualization, a CAD-intensive consulting company can achieve increased mobility without lowered performance while maintaining a high level of data security.

  • 37.
    Sabesan, Manivasakan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Querying mediated web services2007Licentiate thesis, monograph (Other academic)
    Abstract [en]

    Web services provide a framework for data interchange between applications by incorporating standards such as XMLSchema, WSDL, SOAP, HTTP, etc. They define operations to be invoked over a network to perform actions. These operations are described publicly in a WSDL document with the data types of their arguments and results. Searching data accessible via web services is essential in many applications. However, web services do not provide any general query language or view capabilities. Currently, web service applications that access the data must be developed using a regular programming language such as Java or C#.

    The thesis provides an approach to simplify querying web services data and proposes efficient processing of database queries to views of wrapped web services. To show the effectiveness of the approach, a prototype, webService MEDiator system (WSMED), is developed.

    WSMED provides general view and query capabilities over data accessible through web services by automatically extracting basic meta-data from WSDL descriptions. Based on imported meta-data, the user can then define views that extract data from the results of calls to web service operations. The views can be queried using SQL. A given view can access many different web service operations in different ways depending on what view attributes are known. The views can be specified in terms of several declarative queries to be applied by the query processor. In addition, the user can provide semantic enrichments of the meta-data with key constraints to enable efficient query execution over the views by automatic query transformations. We evaluated the effectiveness of our approach over multilevel views of existing web services and showed that the key constraint enrichments substantially improve query performance.

  • 38.
    Sabesan, Manivasakan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Querying Data Providing Web Services2010Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Web services are often used for search computing where data is retrieved from servers providing information of different kinds. Such data providing web services return a set of objects for a given set of parameters without any side effects. There is a need to enable general and scalable search capabilities over data from data providing web services, which is the topic of this Thesis.

    The Web Service MEDiator (WSMED) system automatically provides relational views of any data providing web service operations by reading the WSDL documents describing them. These views can be queried with SQL. Without any knowledge of the costs of executing specific web service operations the WSMED query processor automatically and adaptively finds an optimized parallel execution plan calling queried data providing web services.

    For scalable execution of queries to data providing web services, an algebra operator PAP adaptively parallelizes calls in execution plans to web service operations until no significant performance improvement is measured, based on monitoring the flow from web service operations without any cost knowledge or extensive memory usage.

    To comply with the Everything as a Service (XaaS) paradigm, WSMED itself is implemented as a web service that provides web service operations to query and combine data from data providing web services. A web-based demonstration of the WSMED web service provides general SQL queries to any data providing web service operations from a browser.

    WSMED assumes that all queried data sources are available as web services. To make any data providing system into a data providing web service WSMED includes a subsystem, the web service generator, which generates and deploys the web service operations to access a data source. The WSMED web service itself is generated by the web service generator.
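
    The core pattern behind parallelizing calls to a data providing operation can be sketched with the standard library; the operation below is a placeholder, and the fixed fan-out stands in for PAP, which instead adapts the parallelism while monitoring the flow.

        # Sketch: fan out calls to a side-effect-free web service operation.
        from concurrent.futures import ThreadPoolExecutor

        def get_places_by_zip(zip_code):
            """Placeholder for a data providing web service call."""
            return [f"place-for-{zip_code}"]

        zip_codes = ["75105", "11152", "41319", "90184"]

        with ThreadPoolExecutor(max_workers=4) as pool:   # fixed fan-out
            results = list(pool.map(get_places_by_zip, zip_codes))

        print([row for rows in results for row in rows])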

  • 39.
    Sandberg, Sven
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Games and Probabilistic Infinite-State Systems2007Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Computer programs keep finding their way into new safety-critical applications, while at the same time growing more complex. This calls for new and better methods to verify the correctness of software. We focus on one approach to verifying systems, namely that of model checking. First, we investigate two categories of problems related to model checking: games and stochastic infinite-state systems. In the end, we join these two lines of research by studying stochastic infinite-state games.

    Game theory has been used in verification for a long time. We focus on finite-state 2-player parity and limit-average (mean payoff) games. These problems have applications in model checking for the μ-calculus, one of the most expressive logics for programs. We give a simplified proof of memoryless determinacy. The proof applies both to parity and limit-average games. Moreover, we suggest a strategy improvement algorithm for limit-average games. The algorithm is discrete and strongly subexponential.

    We also consider probabilistic infinite-state systems (Markov chains) induced by three types of models. Lossy channel systems (LCS) have been used to model processes that communicate over an unreliable medium. Petri nets model systems with unboundedly many parallel processes. Noisy Turing machines can model computers where the memory may be corrupted in a stochastic manner. We introduce the notion of eagerness and prove that all these systems are eager. We give a scheme to approximate the value of a reward function defined on paths. Eagerness allows us to prove that the scheme terminates. For probabilistic LCS, we also give an algorithm that approximates the limit-average reward. This quantity describes the long-run behavior of the system.

    Finally, we investigate Büchi games on probabilistic LCS. Such games can be used to model a malicious cracker trying to break a network protocol. We give an algorithm to solve these games.

  • 40.
    Scott, Joseph
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Other Things Besides Number: Abstraction, Constraint Propagation, and String Variable Types2016Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Constraint programming (CP) is a technology in which a combinatorial problem is modeled declaratively as a conjunction of constraints, each of which captures some of the combinatorial substructure of the problem. Constraints are more than a modeling convenience: every constraint is partially implemented by an inference algorithm, called a propagator, that rules out some but not necessarily all infeasible candidate values of one or more unknowns in the scope of the constraint. Interleaving propagation with systematic search leads to a powerful and complete solution method, combining a high degree of re-usability with natural, high-level modeling.

    A propagator can be characterized as a sound approximation of a constraint on an abstraction of sets of candidate values; propagators that share an abstraction are similar in the strength of the inference they perform when identifying infeasible candidate values. In this thesis, we consider abstractions of sets of candidate values that may be described by an elegant mathematical formalism, the Galois connection. We develop a theoretical framework from the correspondence between Galois connections and propagators, unifying two disparate views of the abstraction-propagation connection, namely the oft-overlooked distinction between representational and computational over-approximations. Our framework yields compact definitions of propagator strength, even in complicated cases (i.e., involving several types, or unknowns with internal structure); it also yields a method for the principled derivation of propagators from constraint definitions.

    We apply this framework to the extension of an existing CP solver to constraints over strings, that is, words of finite length. We define, via a Galois connection, an over-approximation for bounded-length strings, and demonstrate two different methods for implementing this over-approximation in a CP solver. First we use the Galois connection to derive a bounded-length string representation as an aggregation of existing scalar types; propagators for this representation are obtained by manual derivation, or automated synthesis, or a combination. Then we implement a string variable type, motivating design choices with knowledge gained from the construction of the over-approximation. The resulting CP solver extension not only substantially eases modeling for combinatorial string problems, but also leads to substantial efficiency improvements over prior CP methods.
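
    The propagator concept itself is compact enough to sketch. Below is a bounds-consistency propagator for the constraint x + y = z over integer intervals, one of the simplest over-approximations of sets of candidate values; it is illustrative only and far from the string-domain machinery of the thesis.

        # Sketch: a bounds-consistency propagator for x + y = z, with each
        # unknown's candidate values over-approximated by an interval (lo, hi).
        def propagate_sum(x, y, z):
            """Shrink the intervals so that x + y = z remains possible."""
            changed = True
            while changed:
                nz = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))
                nx = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))
                ny = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))
                changed = (nx, ny, nz) != (x, y, z)
                x, y, z = nx, ny, nz
            return x, y, z

        # x,y in [0,10] and z = 15: propagation rules out all x, y below 5.
        print(propagate_sum((0, 10), (0, 10), (15, 15)))
        # -> ((5, 10), (5, 10), (15, 15))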

  • 41.
    Somla, Rafał
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Logics and Algorithms for Verification of Concurrent Systems2012Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis we investigate how the known framework of automatic formal verification by model checking can be extended in different directions. One extension is to go beyond the common limitation of the existing specification formalisms, that they can describe only regular properties of components. This can be achieved using logics capable of expressing non-regular properties, such as the Propositional Dynamic Logic of Context-free Programs (PDLCF), Fixpoint Logic with Chop (FLC) or the Higher-order Fixpoint Logic (HFL). Our main result in this area is proving that the problem of model checking HFL formulas of order bounded by k is k-EXPTIME-complete. In the proofs we demonstrate two model checking algorithms for that logic. We also show that PDLCF is equivalent to a proper fragment of FLC.

    The standard model checking algorithms, which are run on a single computer, are severely limited by the amount of available computing resources. A way to overcome this limitation is to develop distributed algorithms, which can be run on a cluster of computers and use their joint resources. In this thesis we show how a distributed model checking algorithm for the alternation-free fragment of the modal μ-calculus can be extended to handle formulas with one level of alternation. This is an important extension, since Lμ formulas with one level of alternation can express the same properties as the logics LTL and CTL commonly used in formal verification.

    Finally, we investigate stochastic games which can be used to model additional aspects of components, such as their interaction with environment and their quantitative properties. We describe new algorithms for finding optimal values and strategies in turn-based stochastic games with reachability winning conditions. We prove their correctness and report on experiments where we compare them against each other and against other known algorithms, such as value iteration and strategy improvement.
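
    Of the algorithms mentioned, value iteration is the most compact to sketch. Below it is run on a made-up turn-based stochastic reachability game; the instance and its encoding are illustrative assumptions.

        # Sketch: value iteration for a turn-based stochastic reachability game.
        # States belong to 'max' (tries to reach the target), 'min' (opposes),
        # or 'avg' (probabilistic, with a distribution over successors).
        game = {
            "s0": ("max", ["s1", "s2"]),
            "s1": ("avg", [("goal", 0.5), ("trap", 0.5)]),
            "s2": ("min", ["s1", "trap"]),
            "goal": ("max", ["goal"]),
            "trap": ("min", ["trap"]),
        }
        target = {"goal"}

        V = {s: 1.0 if s in target else 0.0 for s in game}
        for _ in range(100):                      # iterate toward the fixpoint
            for s, (owner, succ) in game.items():
                if s in target:
                    continue
                if owner == "max":
                    V[s] = max(V[t] for t in succ)
                elif owner == "min":
                    V[s] = min(V[t] for t in succ)
                else:
                    V[s] = sum(p * V[t] for t, p in succ)

        print({s: round(v, 3) for s, v in V.items()})   # reachability values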

  • 42.
    Stefanova, Silvia
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Scalable Preservation, Reconstruction, and Querying of Databases in terms of Semantic Web Representations2013Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This Thesis addresses how Semantic Web representations, in particular RDF, can enable flexible and scalable preservation, recreation, and querying of databases.

    An approach has been developed for selective scalable long-term archival of relational databases (RDBs) as RDF, implemented in the SAQ (Semantic Archive and Query) system. The archival of user-specified parts of an RDB is specified using an extension of SPARQL, A-SPARQL. SAQ automatically generates an RDF view of the RDB, the RD-view. The result of an archival query is RDF triples stored in: i) a data archive file containing the preserved RDB content, and ii) a schema archive file containing sufficient meta-data to reconstruct the archived database. To achieve scalable data preservation and recreation, SAQ uses special query rewriting optimizations for the archival queries. It was experimentally shown that they improve query execution and archival time compared with naïve processing. The performance of SAQ was compared with that of other systems supporting SPARQL queries to views of existing RDBs.

    When an archived RDB is to be recreated, the reloader module of SAQ first reads the schema archive file and executes a schema reconstruction algorithm to automatically construct the RDB schema. The RDB thus created is populated by reading the data archive and converting the read data into relational attribute values. For scalable recreation of RDF-archived data we have developed the Triple Bulk Load (TBL) approach, where the relational data is reconstructed by using the bulk load facility of the RDBMS. Our experiments show that the TBL approach is substantially faster than the naïve Insert Attribute Value (IAV) approach, despite the added sorting and post-processing.

    To view and query semi-structured Topic Maps data as RDF, the prototype system TM-Viewer was implemented. A declarative RDF view of Topic Maps, the TM-view, is automatically generated by TM-Viewer using a conceptual schema developed for the Topic Maps data model. To achieve efficient processing of SPARQL queries to the TM-view, query rewrite transformations were developed and evaluated. It was shown that they significantly improve the query execution time.

  • 43.
    Stenman, Erik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Efficient Implementation of Concurrent Programming Languages2002Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Dissertation in Computer Science to be publicly examined in Häggsalen, Ångströmlaboratoriet, Uppsala University, on Friday, November 1, 2002 at 1:00 pm for the degree of doctor of philosophy. The examination will be conducted in English.

    This thesis proposes and experimentally evaluates techniques for efficient implementation of languages designed for high availability concurrent systems. This experimental evaluation has been done while developing the High Performance Erlang (HiPE) system, a native code compiler for SPARC and x86. The two main goals of the HiPE system are to provide efficient execution of Erlang programs, and to provide a research vehicle for evaluating implementation techniques for concurrent functional programming languages.

    The focus of the thesis is the evaluation of two techniques that enable inter-process optimization through dynamic compilation. The first technique is a fast register allocator called linear scan, and the second is a memory architecture where processes share memory.

    The main contributions of the thesis are:

    An evaluation of linear scan register allocation in a different language setting. In addition, the performance of linear scan on the register-poor x86 architecture is evaluated for the first time.

    A description of three different heap architectures (private heaps, shared heap, and a hybrid of the two), with a systematic investigation of implementation aspects and an extensive discussion on the associated performance trade-offs of the heap architectures. The description is accompanied by an experimental evaluation of the private vs. the shared heap setting.

    A novel approach to optimizing a concurrent program, by merging code from a sender with code from a receiver, is presented together with other methods for reducing the overhead of context switching.

    A description of the implementation aspects of a complete and robust native code Erlang system, which makes it possible to test compiler optimizations on real world programs.

  • 44.
    Tarasova, Natalya
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Classification of Hate Tweets and Their Reasons using SVM2016Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This study focused on finding the hate tweets posted by the customers of three mobile operators, Verizon, AT&T and Sprint, and identifying the reasons for their dissatisfaction. The timelines with a hate tweet were collected and studied for the presence of an explanation.

    A machine learning approach was employed using four categories: Hate, Reason, Explanatory and Other. The classification was conducted with a one-versus-all approach using the Support Vector Machine algorithm implemented in the LIBSVM tool.

    The study resulted in two methodologies: the Naive Method (NM) and the Partial Time-line Method (PTM). The Naive Method relied only on the feature space consisting of the most representative words, chosen with the Akaike Information Criterion. PTM utilized the fact that the majority of the explanations were posted within a one-hour time window of the posting of a hate tweet.

    We found that the accuracy of PTM is higher than that of NM. In addition, PTM saves time and memory by analysing fewer tweets. At the same time this implies a trade-off between relevance and completeness.
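
    The one-versus-all setup over the four categories can be sketched with scikit-learn, whose SVM bindings build on the same LIBSVM/LIBLINEAR code base; the tweets below are placeholders.

        # Sketch: one-versus-all SVM classification into the four categories.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.svm import LinearSVC

        tweets = ["I hate this operator, worst service ever",     # placeholders
                  "dropped calls all week, that is why I left",
                  "switched to a new plan today",
                  "good morning everyone"]
        labels = ["Hate", "Reason", "Explanatory", "Other"]

        vec = TfidfVectorizer()
        X = vec.fit_transform(tweets)
        clf = OneVsRestClassifier(LinearSVC()).fit(X, labels)  # one SVM per class
        print(clf.predict(vec.transform(["so angry at my operator"])))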

  • 45.
    Thalmann, Lars
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Term-modal logic and quantifier-free dynamic assignment logic2000Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this dissertation, we present two new sorts of computer science logics.

    Many powerful logics exist today for reasoning about multi-agent systems, but in most of these it is hard to reason about an infinite or indeterminate number of agents. Also, the naming schemes used in the logics often lack the expressiveness to name agents in an intuitive way.

    To obtain a more expressive language for multi-agent reasoning and a better naming scheme for agents, we introduce in the first part of the dissertation a family of logics called term-modal logics. A main feature of our logics is the use of modal operators indexed by the terms of the logics. Thus, one can quantify over variables occurring in modal operators. In term-modal logics agents can be represented by terms, and knowledge of agents is expressed with formulas within the scope of modal operators.

    This gives us a flexible and uniform language for reasoning about the agents themselves and their knowledge. We give examples of the expressiveness of the languages and provide sequent-style and tableau-based proof systems for the logics. Furthermore, we give proofs of soundness and completeness with respect to the possible world semantics.

    In the second part of the dissertation, we treat another problem in reasoning about multi-agent systems, namely the problem of information updating. We develop a dynamic logic of assignments with a scoping operator instead of quantifiers. Function symbols, relation symbols and logic variables are all rigidly interpreted in our semantics, while program variables are non-rigid. The scoping operator is used to distinguish between the value of a program variable before and after the execution of a program.

    We provide a tableau proof system for the logic. First, the system is proved complete without the star operator, and then with the star operator using an omega rule. The full logic is shown to be undecidable, while some interesting fragments are decidable.

  • 46.
    Truong, Thanh
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Main-Memory Query Processing Utilizing External Indexes2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Many applications require storage and indexing of new kinds of data in main memory, e.g. color histograms, textures, shape features, gene sequences, sensor readings, or financial time series. Even though many domain-specific index structures have been developed, very few of them are implemented in any database management system (DBMS), usually only B-trees and hash indexes. A major reason is that the manual effort to include a new index implementation in a regular DBMS is very costly and time-consuming, because it requires integration with all components of the DBMS kernel. To alleviate this, there are some extensible indexing frameworks. However, they all require re-engineering the index implementations, which is a problem when the index has third-party ownership, when only binary code is available, or simply when the index implementation is complex to re-engineer. Therefore, the DBMS should allow including new index implementations without code changes or performance degradation. Furthermore, for high performance the query processor needs knowledge of how to process queries that utilize a plugged-in index. Moreover, it is important that all functionalities of a plugged-in index implementation are correct.

    The extensible main-memory database system (MMDB) Mexima (Main-memory External Index Manager) addresses these challenges. It enables transparent plugging in of main-memory index implementations without code changes. Index-specific rewrite rules transform complex queries to utilize the indexes. Automatic test procedures validate their correctness based on user-provided index meta-data. Moreover, the same optimization framework can also optimize complex queries sent to a back-end DBMS by exposing hidden indexes to its query optimizer.

    Altogether, Mexima is a complete and extensible platform for transparent index integration, utilization, and evaluation.
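
    The plug-in-plus-rewrite idea can be caricatured in a few lines; the interface below is invented for illustration, whereas Mexima plugs in real index implementations without code changes.

        # Sketch: a pluggable index behind a generic interface, and a query
        # that is rewritten to an index lookup whenever an index is registered.
        class HashIndex:                  # stand-in for a third-party index
            def __init__(self):
                self.buckets = {}
            def put(self, key, row):
                self.buckets.setdefault(key, []).append(row)
            def get(self, key):
                return self.buckets.get(key, [])

        table = [{"id": i, "temp": 20 + i % 5} for i in range(1000)]
        indexes = {"temp": HashIndex()}   # catalog: indexed column -> index
        for row in table:
            indexes["temp"].put(row["temp"], row)

        def select_eq(column, value):
            """Equality query, rewritten to an index lookup when possible."""
            if column in indexes:                                # rewrite fires
                return indexes[column].get(value)
            return [r for r in table if r[column] == value]      # full scan

        print(len(select_eq("temp", 22)))   # answered via the plugged-in index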

  • 47.
    Wennerström, Christian
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Collaborative Filtering on Familjeliv.se2011Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The purpose of this project was the improvement of navigation on the forum site Familjeliv.se. The forum contains over two million threads, and site management wanted a system to recommend threads to users, alleviating the problems with navigating all the data. The choice fell on a thread-to-thread similarity-based system, recommending threads similar to the one currently read. Two different similarity measures were tried out, with the first (cosine similarity) being abandoned due to performance issues, replaced with a binary vector similarity solution. The data still needed to be reduced to get execution times down to acceptable levels, and sampling and partitioning were used. A live test showed, after a week and about ten million page views, that the system provided a benefit a bit above 10% over a control condition in terms of number of clicks.
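
    A binary vector similarity over the terms of two threads can be as simple as their Jaccard overlap; a minimal sketch, assuming set-of-terms thread representations (the exact measure used on Familjeliv.se may differ).

        # Sketch: thread-to-thread similarity over binary term vectors.
        def binary_similarity(thread_a, thread_b):
            """Jaccard overlap of the sets of terms occurring in two threads."""
            if not thread_a or not thread_b:
                return 0.0
            return len(thread_a & thread_b) / len(thread_a | thread_b)

        t1 = {"barnvagn", "pris", "tips"}
        t2 = {"barnvagn", "tips", "begagnad"}
        t3 = {"recept", "middag"}
        print(binary_similarity(t1, t2))   # similar threads -> 0.5
        print(binary_similarity(t1, t3))   # unrelated threads -> 0.0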

  • 48.
    Wilenius, Jim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Bidding in Combinatorial Auctions2009Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis concerns the interdisciplinary field of combinatorial auctions, combining the fields of computer science, optimization and economics. A combinatorial auction is an auction where many items are sold simultaneously and where bidders may submit indivisible combinatorial bids on groups of items. It is commonly believed that good solutions to the allocation problem can be achieved by allowing combinatorial bidding.

    Determining who wins in a combinatorial auction is fundamentally different from a traditional single-item auction because we are faced with a hard and potentially intractable optimization problem. Also, unless we are limited to truthful mechanisms, game-theoretic analysis of the strategic behavior of bidders is still an open problem.

    We have chosen primarily to study the first-price combinatorial auction, a natural auction widely used in practice. Theoretical analysis of this type of auction is difficult and little has been done previously. In this thesis we investigate and discuss three fundamental questions with significant practical implications for combinatorial auctions.

    First, because the number of possible bids grows exponentially with the number of items, limitations on the number of bids are typically required. This gives rise to a problem, since bidders are unlikely to choose the "correct" bids that make up the globally optimal solution. We provide evidence that an expressive and compact bidding language can be more important than finding the optimal solution. Second, given a first-price (sealed-bid) combinatorial auction, the question of equilibrium bidding strategies remains an open problem. We propose a heuristic for finding such strategies and also present feasible strategies. And finally, is a first-price combinatorial auction worth pursuing compared to the simpler simultaneous (single-item) auction? We prove, through a model capturing many fundamental properties of multiple-item scenarios with synergies, that the first-price combinatorial auction produces higher revenue than simultaneous single-item auctions. We provide bounds on revenue, given a significantly more general model, in contrast to previous work.
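
    The winner determination problem mentioned above is easy to state in code, even though it is NP-hard in general; the brute-force sketch below makes that concrete on a toy instance with invented bids.

        # Sketch: brute-force winner determination -- pick the revenue-
        # maximizing set of combinatorial bids whose item sets do not overlap.
        from itertools import combinations

        bids = [({"A"}, 5), ({"B"}, 4), ({"A", "B"}, 10),
                ({"B", "C"}, 7), ({"C"}, 2)]          # (items, offered price)

        best_value, best_bids = 0, ()
        for r in range(1, len(bids) + 1):
            for subset in combinations(bids, r):      # exponential search space
                items = [i for s, _ in subset for i in s]
                if len(items) == len(set(items)):     # bids must not overlap
                    value = sum(price for _, price in subset)
                    if value > best_value:
                        best_value, best_bids = value, subset

        print(best_value, [sorted(s) for s, _ in best_bids])   # revenue 12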

  • 49.
    Wilhelmsson, Jesper
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Efficient memory management for message-passing concurrency, Part I: Single-threaded execution2005Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Manual memory management is error prone. Some of the errors it causes, in particular memory leaks and dangling pointers, are hard to find. Manual memory management becomes even harder when concurrency enters the picture. It therefore becomes more and more important to overcome the problems of manual memory management in concurrent software, as interest in such applications increases with the development of new multi-threaded hardware.

    To ease the implementation of concurrent software, many programming languages these days come with automatic memory management and support for concurrency. This support, called the concurrency model of the language, comes in many flavors (shared data structures, message passing, etc.). The performance and scalability of applications implemented using such programming languages depend on the concurrency model, the memory architecture, and the memory manager used by the language. It is therefore important to investigate how different memory architectures and memory management schemes affect the implementation of concurrent software and what performance tradeoffs are involved.

    This thesis investigates ways of efficiently implementing the memory architecture and memory manager in a concurrent programming language. The concurrency model which we explore is based on message passing with copying semantics. We start by presenting the implementation of three different memory architectures for concurrent software and give a detailed characterization of their pros and cons regarding message passing and efficiency in terms of memory management. The issue examined is whether to use private memory for all processes in a program or if there may be benefits in sharing all or parts of the memory. In one of the architectures looked at, called hybrid, a static analysis called message analysis is used to guide the allocation of message data.

    Because the hybrid architecture is the enabling technology for a scalable multi-threaded implementation, we then focus on the hybrid architecture and investigate how to manage the memory using different garbage collection techniques. We present pros and cons of these techniques and discuss their characteristics and their performance in concurrent applications. Finally, our experiences from making the garbage collector incremental are presented. The effectiveness of the incremental collector is compared to that of the non-incremental version. On a wide range of benchmarks, the incremental collector we present is able to sustain high mutator utilization (about 80% during collection cycles) at a low cost.

    This work is implemented in an industrial-strength implementation of the concurrent functional programming language Erlang. Our eventual goal is to use the hybrid architecture and the incremental garbage collector as the basis for an efficient multi-threaded implementation of Erlang. The work described in this thesis is a big step in that direction.

  • 50.
    Xu, Cheng
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Scalable Validation of Data Streams2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In manufacturing industries, sensors are often installed on industrial equipment, generating high volumes of data in real-time. For shortening machine downtime and reducing maintenance costs, it is critical to analyze such streams efficiently in order to detect abnormal behavior of equipment.

    To validate data streams and detect anomalies, a data stream management system called SVALI has been developed. Based on requirements from the application domain, different stream window semantics are explored and an extensible set of window-forming functions is implemented, where dynamic registration of window aggregations allows incremental evaluation of aggregate functions over windows.

    To facilitate stream validation on a high level, the system provides two second-order system validation functions, model-and-validate and learn-and-validate. Model-and-validate allows the user to define mathematical models based on physical properties of the monitored equipment, while learn-and-validate builds statistical models by sampling the stream in real-time as it flows.

    To validate geographically distributed equipment with short response time, SVALI is a distributed system where many SVALI instances can be started and run in parallel on board the equipment. Central analyses are made at a monitoring center where streams of detected anomalies are combined and analyzed on a cluster computer.

    SVALI is an extensible system where functions can be implemented using external libraries written in C, Java, and Python without any modifications of the original code.

    The system and the developed functionality have been applied in several applications, both industrial and in sports analytics.
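
    The learn-and-validate pattern is small enough to sketch: fit a statistical model from an initial sample of the stream, then emit readings that deviate from it. The single sensor, fixed sample size and 3-sigma rule below are simplifying assumptions.

        # Sketch of learn-and-validate: learn a statistical model from a
        # sample of the stream, then yield readings that deviate from it.
        import random
        import statistics

        def learn_and_validate(stream, sample_size=50, k=3.0):
            sample = [next(stream) for _ in range(sample_size)]   # learn phase
            mu = statistics.mean(sample)
            sigma = statistics.stdev(sample)
            for value in stream:                                  # validate phase
                if abs(value - mu) > k * sigma:
                    yield value                # a stream of detected anomalies

        random.seed(1)
        readings = iter([random.gauss(100, 2) for _ in range(500)] + [140.0])
        print(list(learn_and_validate(readings)))   # includes the 140.0 spike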
