
180 Park Ave - Building 103
Florham Park, NJ
David S. Johnson is Head of the Algorithms and Optimization Department at AT&T Labs-Research in Florham Park, NJ, and has been a researcher at the Labs (in its many incarnations) since 1973, when he received his Ph.D. from MIT.
The author of over 100 technical papers, he is perhaps best known as the co-author of COMPUTERS AND INTRACTABILITY: A GUIDE TO THE THEORY OF NP-COMPLETENESS, for which he shared the Lanchester Prize of the Operations Research Society of America (1979). His research interests (in addition to the theory of NP-completeness) include approximation algorithms for combinatorial problems, and the experimental analysis of algorithms, with special emphasis on bin packing, graph coloring, and the traveling salesman problem. As one of the founders of ACM's Kanellakis Prize, he is particularly interested in recognizing and encouraging the interaction between theory and practice in computer science.

Resource-Based Corruptions and the Combinatorics of Hidden Diversity
David Johnson, Juan Garay, Aggelos Kiayias, Moti Yung
ITCS '13: Proceedings of the 4th conference on Innovations in Theoretical Computer Science, pp 4,
2013.
[PDF]
[BIB]
ACM Copyright
(c) ACM, 2012. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in the proceedings of ITCS '13, 2013-01-10.
In the setting of cryptographic protocols, the corruption of a party has been viewed as a simple, uniform, and atomic operation, where the adversary decides to get control over a party and this party immediately gets corrupted. In this paper, motivated by the fact that different players may require different resources to get corrupted, we put forth the notion of ``resource-based corruptions'', where the adversary must invest some resources in order to corrupt a party.
If the adversary has full information about the system configuration, then resource-based corruptions provide no fundamental difference from the standard corruption model. However, in a ``resource anonymous'' setting, in the sense that such configuration is hidden from the adversary, much is to be gained in terms of efficiency and security.
We showcase the power of anonymity in the setting of secure multiparty computation (MPC) with resource-based corruptions and prove that anonymity can effectively be used to circumvent known impossibility results. Specifically, if $OPT$ is the corruption budget that violates the completeness of MPC (the case when half or more of the players are corrupted), we show that by using anonymity, the completeness of MPC can be made to hold against an adversary with as much as a $B \cdot OPT$ budget, for any constant $B > 1$. This result requires a suitable choice of parameters (in terms of number of players and their hardness to corrupt), which we provide, and we further prove other tight variants of the result when the said choice is not available. Regarding efficiency gains, we show that anonymity can be used to force the corruption threshold to drop from 1/2 to 1/3, in turn allowing the use of much more efficient (information-theoretic) MPC protocols.
We achieve the above through a series of technical contributions:
- the formulation of the notion of ``inversion effort preserving'' (IEP) functions, which is a type of direct-sum property, and the property of ``hardness indistinguishability''; while hardness indistinguishability enables the dissociation of parties' identities and the resources needed to corrupt them, IEP enables the discretization of adversarial work into corruption tokens;
- the modeling of the corruption process in the setting of MPC through ``corruption oracles'', as well as the introduction of a notion of reduction to relate such oracles;
- the abstraction of the corruption game as a combinatorial problem and its analysis;
all of which may be of independent interest.
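The gap the paper exploits can be seen with a toy calculation (my own illustration, not an example from the paper): if half of the parties are cheap to corrupt and half are expensive, an informed adversary pays only for the cheap half, while a resource-anonymous adversary, unable to tell parties apart, pays on average for a mix of both. The specific costs below are assumptions chosen purely to make the ratio visible.

```python
import random

# Toy instance: n parties, half "cheap" (cost 1) and half "expensive" (cost C)
# to corrupt; violating MPC completeness requires corrupting at least n/2 parties.
n, C, trials = 20, 10, 100_000
costs = [1] * (n // 2) + [C] * (n // 2)

# Informed adversary: corrupts the cheapest n/2 parties.
opt = sum(sorted(costs)[: n // 2])

# Resource-anonymous setting: the adversary cannot distinguish parties, so it
# effectively corrupts a uniformly random subset of size n/2.
rng = random.Random(0)
avg_blind = sum(sum(rng.sample(costs, n // 2)) for _ in range(trials)) / trials

print(f"OPT = {opt}, expected blind cost = {avg_blind:.1f} "
      f"(about {avg_blind / opt:.1f} times OPT)")
```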

A brief history of NP-completeness, 1954-2012
David Johnson
Documenta Mathematica, Extra Volume: "Optimization Stories", Martin Grötschel, Editor,
2012.
[PDF]
[BIB]
DOCUMENTA MATHEMATICA Copyright
The definitive version was published in Documenta Mathematica, Extra Volume "Optimization Stories", 2012-08-19. www.math.uiuc.edu/documenta/
The year 2012 marks the 40th anniversary of the publication of the
influential paper ``Reducibility among combinatorial problems'' by Richard
Karp, which first demonstrated the wide applicability of the
concept now known as NP-completeness, which had been introduced the previous
year by Stephen Cook and Leonid Levin, independently.
It also marks the 100th anniversary of the birth of Alan Turing,
whose invention of what is now known as the ``Turing machine'' underlay that
concept. In this chapter, I shall briefly sketch the history and pre-history of
NP-completeness (with pictures), and provide a brief personal survey of the
developments in the theory over the
last 40 years and their impact (or lack thereof) on the practice and
theory of optimization.
I assume the reader is familiar with the basic concepts
of NP-completeness, P, and NP, although I hope the story will still
be interesting to those with only a fuzzy recollection of the definitions.

Network architecture for joint failure recovery and traffic engineering
Dahai Xu, Robert Doverspike, David Johnson, Martin Suchara, Jennifer Rexford
Sigmetrics 2011,
2011.
[PDF]
[BIB]
ACM Copyright
(c) ACM, 2011. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM SIGMETRICS, 2011-06-07.
Today's networks typically handle traffic engineering (e.g., tuning the routing-protocol parameters to optimize the flow of traffic) and failure recovery (e.g., pre-installed backup paths) independently. In this paper, we propose a unified way to balance load efficiently under a wide range of failure scenarios. Our architecture supports flexible splitting of traffic over multiple precomputed paths, with efficient path-level failure detection and automatic load balancing over the remaining paths. We propose two candidate solutions that differ in how the routers rebalance the load after a failure, leading to a trade-off between router complexity and load-balancing performance. We present and solve the optimization problems that compute the configuration state for each router. Our experiments with traffic measurements and topology data (including shared risks in the underlying transport network) from a large ISP identify a "sweet spot" that achieves near-optimal load balancing under a variety of failure scenarios, with a relatively small amount of state in the routers. We believe that our solution for joint traffic engineering and failure recovery will appeal to Internet Service Providers as well as the operators of data-center networks.
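As a rough illustration of the simpler router behavior described above (my own sketch, not code from the paper): the ingress router keeps its precomputed splitting ratios and, when path-level failure detection marks a path as down, redistributes that path's share proportionally over the surviving paths.

```python
def rebalance(ratios, alive):
    """ratios: precomputed splitting ratios per path (summing to 1.0);
    alive: result of path-level failure detection for each path."""
    surviving = sum(r for r, up in zip(ratios, alive) if up)
    if surviving == 0:
        raise RuntimeError("all precomputed paths to this egress have failed")
    # Redistribute the failed paths' share proportionally over surviving paths.
    return [r / surviving if up else 0.0 for r, up in zip(ratios, alive)]

# Example: three precomputed paths with ratios 0.5/0.3/0.2; the second path fails.
print(rebalance([0.5, 0.3, 0.2], [True, False, True]))  # [0.714..., 0.0, 0.285...]
```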

Voting power and target-based site prioritization
Steven Phillips, Aaron Archer, David Applegate, David Johnson, Robert Pressey, Desmond Torkornoo, Matthew Watts
Biological Conservation,
2010.
[PDF]
[BIB]
Elsevier Copyright
The definitive version was published in Biological Conservation (Elsevier), Volume 143, 2010-07-01.
Indices for site prioritization are widely used to address the question: which sites are most important for conservation of biodiversity? We investigate the theoretical underpinnings of target-based prioritization, which measures sites' contribution to achieving predetermined conservation targets. We show a strong connection between site prioritization and the mathematical theory of voting power. Current site prioritization indices are afflicted by well-known paradoxes of voting power: a site can have zero priority despite having non-zero habitat (the paradox of dummies), and discovery of habitat in a new site can raise the priority of existing sites (the paradox of new members). These paradoxes arise because of the razor's-edge nature of voting, and therefore we seek a new index that is not strictly based on voting. By negating such paradoxes, we develop a set of intuitive axioms that an index should obey. We introduce a simple new index, "fraction-of-spare," that satisfies all the axioms. For single-species site prioritization, the fraction-of-spare(s) of a site s equals zero if s has no habitat for the species and one if s is essential for meeting the target area for the species. In between those limits it is linearly interpolated, and equals area(s) / (total area - target). In an evaluation involving multi-year scheduling of site acquisitions for conservation of forest types in New South Wales under specified clearing rates, fraction-of-spare outperforms 58 existing prioritization indices. We also compute the optimal schedule of acquisitions for each of three evaluation measures (under the assumed clearing rates) using integer programming, which indicates that there is still potential for improvement in site prioritization for conservation scheduling.
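A minimal sketch of the single-species formula stated above; the function name and the explicit handling of the two boundary cases are my own reading of the abstract, not code from the paper.

```python
def fraction_of_spare(area_s, total_area, target):
    """Single-species fraction-of-spare of a site s with habitat area area_s."""
    spare = total_area - target          # habitat available beyond the target
    if area_s <= 0:
        return 0.0                       # no habitat: zero priority
    if spare <= 0 or area_s >= spare:
        return 1.0                       # s is essential for meeting the target
    return area_s / spare                # linear interpolation between the limits

# Example: 100 km^2 of habitat in total, a 60 km^2 target, and a site with 10 km^2.
print(fraction_of_spare(10, 100, 60))    # 0.25
```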
Method Of Simple And Efficient Failure Resilient Load Balancing,
April 16, 2013
A resilient load balancing method uses fixed paths and a fixed path-splitting strategy to enable ingress routers to efficiently reroute traffic after a failure. An off-line management system computes a set of fixed paths and a set of splitting ratios for routing demand from ingress routers to egress routers, with sufficient capacity to meet demands under each failure scenario. That data is then used by the ingress router to reroute demand after observing a failure.
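To make the off-line side concrete, here is a hypothetical sketch (not the patented method itself) of the capacity check the management system must perform when it selects fixed paths and splitting ratios: every anticipated failure scenario, taken here to be single-link failures, must leave enough capacity on the surviving paths to carry the demand.

```python
def survives_all_failures(paths, ratios, demand, capacity):
    """paths: list of paths, each a list of link ids; ratios: fixed splitting
    ratios (summing to 1.0); capacity: dict mapping link id -> capacity."""
    links = {l for p in paths for l in p}
    for failed in [None] + sorted(links):           # None = no-failure scenario
        alive = [failed not in p for p in paths]
        total = sum(r for r, up in zip(ratios, alive) if up)
        if total == 0:
            return False                            # demand cannot be carried at all
        load = {l: 0.0 for l in links}
        for p, r, up in zip(paths, ratios, alive):
            if up:
                for l in p:
                    load[l] += demand * r / total   # renormalized share on surviving paths
        if any(load[l] > capacity[l] for l in links if l != failed):
            return False
    return True

# Toy example: two disjoint paths, each able to carry the full demand alone.
print(survives_all_failures([["a", "b"], ["c"]], [0.5, 0.5], 10,
                            {"a": 10, "b": 10, "c": 10}))  # True
```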
Knuth Prize 2009.
David S. Johnson Named 2009 Knuth Prize Winner for Innovations that Impacted the Foundations of Computer Science
SIAM Fellow, 2009.
For contributions to algorithms and complexity theory.
INFORMS Computing Society Prize, 2007.
AT&T Fellow, 2005.
Application of Algorithms: Honored for fundamental contributions to the field of computer science and the application of algorithms to problems of importance to AT&T.
ACM Fellow, 1995.
For fundamental contributions to the theories of approximation algorithms and computational complexity, and for outstanding service to ACM.