Research activities
Arne Andersson
See also the publication list.
In my research, I aim to study fundamental problems
in computer science.
One of my goals is to work towards bridging the gap between
theory and practice.
The focus has been mainly on algorithms and
data structures.
In addition, I am beginning to broaden my scope to do active research
in other areas, such as e-commerce, distributed negotiations, and machine learning.
Evidently, it is important to ensure that scientific research is relevant.
Some aspects of relevance, which I try to apply in my research, are listed below.

Simplicity.
Anyone taking a closer look at today's
computer systems, where
modern software seems to need
constantly increasing hardware resources,
will recognise the need for more
efficient software.
One problem is the well-known gap
between theory and practice
in today's computer programming.
There is a need
for truly simple algorithms which at the same time are easy to
implement and run fast on real computers.

Implementation.
Experiments demonstrate the practical value of theoretical results.
In addition, implementation feeds back on theory.
Therefore, I often involve implementation and experiments
in my research.

Interdisciplinary interactions for good applied research.
Good applied research should be founded
on solid theoretical insights.
Algorithm theory
The bounds on fundamental computational problems, like sorting
and searching, are critical for most areas within algorithm theory,
such as computational geometry, graph algorithms, bioinformatics,
and many more.
Some of my major research efforts have been made within this area.

A new upper bound on sorting.
Using a standard instruction set, available in programming
languages such as C, n keys can be sorted in O(n log log n) time.
This is a significant improvement over previously known bounds.
Some selected references:

A. Andersson, T. Hagerup, S. Nilsson, and R. Raman.
Sorting in linear time?
Journal of Computer and System Sciences, vol. 57, pp. 74-93, 1998.

Conference version:
A. Andersson, T. Hagerup, S. Nilsson, and R. Raman.
Sorting in linear time?
In Proceedings of the 27th ACM Symposium on Theory of Computing,
pages 427-436. ACM Press, 1995.

Initial tech. report:
A. Andersson, S. Nilsson and T. Hagerup.
Blasting past fusion trees.
Tech. report, Lund University, 1994.

New bounds on searching.
Another of the most fundamental problems in
computer science
is to maintain an ordered set, supporting
operations like insert, search, delete, neighbour
and range search efficiently.
Also here, a number of new theoretical bounds have been obtained.
Some selected references:

Circuit complexity and the dictionary problem.
We have also studied the relation between
circuit depth and execution time for basic
dictionary operations.
Some selected references:
Simple and practical methods for sorting and searching
Apart from the theoretical achievements mentioned above,
I have spent much effort on simple and practical methods.

Radix sorting and Forward Radix Sort.
We have invented a new efficient sorting
algorithm, Forward Radix Sort. The algorithm
is very simple and fast in practice,
as has been illustrated by experimental work.
Apart from its practical interest, Forward Radix Sort is theoretically
attractive.
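To make the "forward" idea concrete, here is a minimal sketch (in Python, illustrative only) of most-significant-digit-first radix sorting, which inspects characters strictly left to right. The published Forward Radix Sort adds bucketing refinements that this sketch omits.

```python
def msd_radix_sort(strings, pos=0):
    """Simplified MSD (forward) radix sort on strings.

    Within each recursive call, all strings share the same first
    `pos` characters; we distribute them into buckets by the
    character at `pos` and recurse on each bucket in order.
    """
    if len(strings) <= 1:
        return strings
    done, buckets = [], {}
    for s in strings:
        if len(s) == pos:                 # string exhausted: fully placed
            done.append(s)
        else:
            buckets.setdefault(s[pos], []).append(s)
    result = done                         # shorter strings sort first
    for ch in sorted(buckets):            # visit buckets in character order
        result.extend(msd_radix_sort(buckets[ch], pos + 1))
    return result
```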
Some selected references:

Arne Andersson and Stefan Nilsson.
A new efficient radix sort.
In Proc. 35th Annual IEEE Symposium on Foundations of
Computer Science, pages 714-721. IEEE Computer Society Press, 1994.

A. Andersson and S. Nilsson.
Implementing radixsort.
The ACM Journal of Experimental Algorithmics. Volume 3,
Article 7, 1998.

Balanced trees.
In order to
achieve efficient maintenance of a balanced binary search tree,
no shape restriction other than a logarithmic height is required. The
obtained class of trees, general balanced trees,
can be maintained
at a logarithmic amortised cost with no balance information stored in the nodes.
Thus, in the case when amortised bounds are sufficient, there is
no need for sophisticated balance criteria.
It is particularly interesting that general balanced
trees can be implemented very
efficiently.
Indeed, they seem to be faster than any other
comparison-based dictionary we have seen, including
skip lists and the plain unbalanced tree.
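The principle can be illustrated with a small sketch (Python, closely related to scapegoat trees): nodes store no balance information, the only invariant is that the depth of any node stays within c * log2(n), and a violation triggers a rebuild to perfect balance. For brevity this sketch rebuilds the whole tree; the actual technique rebuilds only the lowest unbalanced subtree on the search path.

```python
import math

class Node:
    __slots__ = ("key", "left", "right")   # no balance field stored
    def __init__(self, key):
        self.key = key
        self.left = self.right = None

class GBTree:
    """Sketch of a general-balanced (scapegoat-style) tree: the only
    shape restriction is height <= c * log2(n); violations are
    repaired by rebuilding, at logarithmic amortised cost."""

    def __init__(self, c=1.5):
        self.root, self.n, self.c = None, 0, c

    def insert(self, key):
        self.n += 1
        depth, parent, node = 0, None, self.root
        while node is not None:
            parent = node
            node = node.left if key < node.key else node.right
            depth += 1
        new = Node(key)
        if parent is None:
            self.root = new
        elif key < parent.key:
            parent.left = new
        else:
            parent.right = new
        # Height invariant violated: rebuild.  (A real implementation
        # rebuilds only the lowest "scapegoat" subtree on the path.)
        if depth > self.c * math.log2(max(self.n, 2)):
            self._rebuild()

    def _rebuild(self):
        keys = []
        def flatten(nd):                  # in-order traversal -> sorted keys
            if nd is not None:
                flatten(nd.left); keys.append(nd.key); flatten(nd.right)
        flatten(self.root)
        def build(lo, hi):                # rebuild to perfect balance
            if lo > hi:
                return None
            mid = (lo + hi) // 2
            nd = Node(keys[mid])
            nd.left, nd.right = build(lo, mid - 1), build(mid + 1, hi)
            return nd
        self.root = build(0, len(keys) - 1)
```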
Another example is
new, simple and practical
balancing methods for binary search trees. (See also
Mark Allen Weiss: Data Structures and Algorithm
Analysis, pages 467-473, Benjamin/Cummings Publishing Company Inc,
ISBN 0-8053-9057-X, 1994.)
Some selected references:
String processing
String processing is becoming more and more important with the
extended use of
free-text searching and data mining in large databases
and with the increased
interest in bioinformatics.

Trie structures.
A basic, well known, and thoroughly studied
technique for storing and retrieving data is to
use tries.
We are developing and analysing the level-compressed trie (LC-trie).
This data structure is simple to implement and
has a significantly lower search cost than traditional tries.
The LC-trie has recently been shown to be of great use in
a time-critical application; it seems to be superior
to previous methods for the organisation of address
tables for Internet routers.
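For context, here is a sketch (Python, illustrative only) of the uncompressed baseline: a plain binary trie answering longest-prefix-match queries as a router would. Level compression would replace every subtree that is complete down to depth k by a single 2^k-way branch, shortening the search walk considerably on real routing tables.

```python
class TrieNode:
    __slots__ = ("children", "value")
    def __init__(self):
        self.children = {}   # bit ('0'/'1') -> child node
        self.value = None    # next-hop stored where a prefix ends

def insert(root, prefix_bits, value):
    """Insert a routing prefix, given as a string of '0'/'1' bits."""
    node = root
    for b in prefix_bits:
        node = node.children.setdefault(b, TrieNode())
    node.value = value

def longest_prefix_match(root, addr_bits):
    """Walk the trie along the address bits, remembering the last
    prefix seen; that is the longest matching prefix."""
    node, best = root, None
    for b in addr_bits:
        if node.value is not None:
            best = node.value
        node = node.children.get(b)
        if node is None:
            break
    else:
        if node.value is not None:
            best = node.value
    return best
```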
Some selected references:

Word suffix trees.
We have developed efficient algorithms
for an intrinsic generalisation of the suffix tree, designed
to index a string of length n which has a natural partitioning
into m multicharacter substrings or words. This
word suffix tree represents
only the suffixes that start
at word boundaries. These boundaries are determined by
delimiters, whose definition depends on the application.
Since traditional suffix tree construction algorithms rely heavily
on the fact that all suffixes are inserted, construction of a
word suffix tree is nontrivial, in particular when the amount of
construction space is critical. We solve this problem, presenting an
efficient construction algorithm.
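The set of suffixes such a tree represents can be shown with a naive sketch (Python, illustrative only; the point of the construction algorithm is precisely to avoid materialising these suffixes explicitly):

```python
def word_suffixes(text, delimiter=" "):
    """Return (sorted) only those suffixes of `text` that start at
    word boundaries -- the strings a word suffix tree indexes."""
    suffixes, pos = [], 0
    for word in text.split(delimiter):
        suffixes.append(text[pos:])       # suffix starting at this word
        pos += len(word) + 1              # skip word and delimiter
    return sorted(suffixes)
```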
A reference:

Efficient implementations of
suffix trees for indexing external memory.
We have also developed a new, efficient, implementation
of suffix trees based on level-compressed
tries.
A reference:

String searching, sorted array.
Given n strings, each of k characters, arranged in alphabetical
order, how many characters must
we probe to determine whether a k-character query string
is present? The question is a fundamental one; we are
simply asking for the complexity of searching a
dictionary for a string, where the common
assumption that entire strings can be compared in
constant time is replaced by the
assumption that only single characters can be compared
in constant time.
For sufficiently long strings, the latter assumption
seems more realistic.
At first glance the problem may appear easy: some
kind of generalised binary search should do
the trick.
However, closer acquaintance with the problem reveals
a surprising intricacy.
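A sketch of the naive approach makes the cost model concrete: plain binary search may spend Theta(k log n) single-character probes, because each of the log n steps can re-examine characters that earlier steps already matched. The sketch below (Python, illustrative only) counts probes explicitly.

```python
def search_sorted_strings(strings, query):
    """Binary search over a sorted array of strings, charging one
    'probe' per single-character comparison.  Returns (found, probes).
    Worst case Theta(k log n) probes for k-character strings; smarter
    schemes reuse already-matched characters across steps."""
    probes = 0
    lo, hi = 0, len(strings) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        cand = strings[mid]
        i = 0
        while i < len(query) and i < len(cand):
            probes += 1                       # one character comparison
            if query[i] != cand[i]:
                break
            i += 1
        if i == len(query) and i == len(cand):
            return True, probes               # exact match
        if i < len(query) and (i == len(cand) or query[i] > cand[i]):
            lo = mid + 1                      # query sorts after cand
        else:
            hi = mid - 1                      # query sorts before cand
    return False, probes
```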
Some selected references:
Bioinformatics
Uppsala University is currently putting much effort
into new research areas; one important such area is
bioinformatics. The effort is centered around the
Linnaeus Centre for Bioinformatics, where I am
a board member. Bioinformatics is a broad and rapidly
evolving discipline where much algorithmic research and
competence is needed. See also the section on string processing above.
References:
E-commerce and combinatorial auctions
This project is based on a clear vision of improving the state of the art in
the theory and practice of algorithms for
electronic negotiations, and also of contributing to research in economics and game theory.

A major research challenge: analyse the revenue efficiency of combinatorial auctions.
It is a general belief that when bidders have synergies, a combinatorial auction gives higher revenue to the auctioneer than
a single-bid auction. However, it has proven to be very hard to construct an analytic proof of this fact.
Indeed, the only existing game-theoretic analysis is for a restricted case with only two items, and in this case
only the negative result has been proven: that the single-bid auction gives higher revenue.
We are working on a game-theoretic analysis for larger instances, proving that the combinatorial auction actually
gives better revenue. This result, if successful, will have a large impact on the economic theory of combinatorial auctions.

Tractable mechanisms for complex negotiations.
A key issue is the design of
market mechanisms and winner-determination algorithms that allow complex negotiations,
in particular various forms of combinatorial auctions,
to be handled with tractable complexity. While this
is theoretically challenging, it is also of great practical importance.
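To make the computational problem concrete, here is a brute-force winner-determination sketch (Python; the bid format is hypothetical): it maximises revenue over all sets of non-overlapping bids. This exhaustive search is exponential in the number of bids, which is exactly the hardness that tractable mechanism design aims to tame.

```python
from itertools import combinations

def winner_determination(bids):
    """Exact winner determination for a combinatorial auction by
    brute force: pick the revenue-maximising set of bids such that
    no item is sold twice.  `bids` is a list of
    (frozenset_of_items, price) pairs -- an illustrative format.
    The general problem is NP-hard, hence the exponential loop."""
    best_value, best_set = 0, []
    for r in range(len(bids) + 1):
        for subset in combinations(bids, r):
            items, feasible = set(), True
            for goods, _ in subset:
                if items & goods:        # item sold twice: infeasible
                    feasible = False
                    break
                items |= goods
            if feasible:
                value = sum(price for _, price in subset)
                if value > best_value:
                    best_value, best_set = value, list(subset)
    return best_value, best_set
```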
Theory and practice.
One major goal is to produce results both in terms of theory and of practically useful
methods. In this way, we will contribute to the understanding of the complexity of advanced negotiations and resource allocation in general.

Software implementation.
Experiments demonstrate the practical value of theoretical results.
In addition, implementation feeds back on theory.
Therefore, we involve implementation and experiments
in our research.

Multidisciplinary research, involving experimental economics.
We will conduct multidisciplinary work in the form of, among others,
experimental economics, where humans are used to evaluate market designs
and their efficiency in practice. This will further stress the
intriguing balance between algorithmic and gametheoretical aspects on one hand and
cognitive aspects on the other.
Altogether, we hope not only to provide new scientific insights, but
also to assist society in taking the step from simple transactions to
fully developed electronic negotiation, so that partners in business and government networks can achieve greater flexibility, more efficient use of resources, and increased economic welfare.
Some selected references:

A. Andersson and F. Ygge.
Managing large-scale computational markets.
In Proc. 31st Hawaiian International Conference on System Sciences,
Vol. VII, Software Technology Track,
pages 4-13, IEEE Computer Society, 1998. (Winner of the best paper award of the Software Technology Track.)

F. Ygge, H. Akkermans, A. Andersson, M. Krejic, and E. Bortjes.
The HomeBots System and Field Tests:
A Multi-Commodity Market for Predictive
Load Management.
Proceedings of the Fourth International Conference and Exhibition on The Practical Application of
Intelligent Agents and Multi-Agents (PAAM99). London, UK, 1999.

Arne Andersson, Per Carlsson, and Fredrik Ygge.
Resource Allocation With Wobbly Functions.
Computational Optimization and Applications,
vol. 23, no. 2,
pp. 171-200, 2002.

K. Asrat and A. Andersson.
Caching in multi-unit combinatorial auctions.
In Proc. First International Joint Conference on Autonomous Agents and Multiagent
Systems (AAMAS 2002).

P. Carlsson, A. Andersson, and F. Ygge.
A Tractable Mechanism for Time Dependent Markets.
2003 IEEE Conference on E-Commerce
(CEC'03).

A. Andersson, J. Holmström, and M. Willman.
An auction mechanism for polynomial-time execution with combinatorial constraints.
2005 IEEE Conference on E-Commerce
(CEC'05).

P. Carlsson and A. Andersson.
A Flexible Model for Tree-Structured Multi-Commodity Markets.
2005 IEEE Conference on E-Commerce
(CEC'05).

Maria Karlsson, Fredrik Ygge, and Arne Andersson.
Market-Based Approaches to Optimization.
Computational Intelligence, 2007.

Per Carlsson, Arne Andersson.
A flexible model for tree-structured multi-commodity markets.
Electronic Commerce Research, vol 7, no 1, March 2007.

J. Wilenius and A. Andersson.
Discovering Equilibrium Strategies for a Combinatorial First Price Auction.
IEEE Joint Conference on E-Commerce Technology (CEC'07) and Enterprise Computing, E-Commerce and E-Services (EEE'07).
Machine learning
I also have some interest in machine learning.
One of my standpoints is some skepticism about some of the claims made
about neural networks. In my opinion, the quality of a
generalization algorithm (or some other learning algorithm)
should depend on the properties of the produced output, not on how the algorithm works itself.
A discussion of this can be found in
A. Andersson, P. Davidsson, and J. Linden.
Model selection using measure functions.
Proc. of the ECML'98 Workshop on Upgrading Learning
to the Meta-Level: Model Selection and Data Transformation,
pages 54-65, 1998.
I also have some preliminary results on two well-studied learning problems; in both cases we produce solutions
which are both simpler and better than neural networks.
(Why take a detour via a complicated solution when you can solve
the problem better directly?) The two problems are (ongoing, unpublished work):
Object recognition with higher-order neural networks.

The truck backer-upper.
More on this is to be presented.