As it has come to be stated, if a real polynomial is arranged in ascending or descending powers, its number of positive roots is no more than the number of sign variations in consecutive coefficients, and differs from this upper bound by an even integer. In 1807, ...
ABSTRACT Recently, Kalai et al. [1] have shown (among other things) that linear threshold functions over the Boolean cube and unit sphere are agnostically learnable with respect to the uniform distribution using the hypothesis class of polynomial threshold functions. Their primary algorithm computes monomials of large constant degree, although they also analyze a low-degree algorithm for learning origin-centered halfspaces over the unit sphere. This paper explores noise-tolerant learnability of linear thresholds over the cube when the learner sees a very limited portion of each instance. Uniform-distribution weak learnability results are derived for the agnostic, unknown attribute noise, and malicious noise models. The noise rates that can be tolerated vary: the rate is essentially optimal for attribute noise, constant (roughly 1/8) for agnostic learning, and non-trivial ($\Omega(1/\sqrt{n})$) for malicious noise. In addition, a new model that lies between the product attribute and malicious noise models is introduced, and in this stronger model results similar to those for the standard attribute noise model are obtained for learning homogeneous linear thresholds with respect to the uniform distribution over the cube. The learning algorithms presented are simple and have small-polynomial running times.
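For concreteness, here is a minimal sketch (not the paper's algorithm; all names and parameters are illustrative) of the two objects this abstract pairs together: a linear threshold function over the Boolean cube $\{-1,1\}^n$ and attribute noise that flips each input bit independently.

```python
import random

def linear_threshold(w, theta, x):
    """Halfspace over the cube: +1 if w.x - theta >= 0, else -1."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) - theta >= 0 else -1

def attribute_noise(x, p):
    """Flip each attribute of x independently with probability p."""
    return [-xi if random.random() < p else xi for xi in x]

n = 5
w = [1.0] * n                                   # illustrative weights: the majority function
x = [random.choice([-1, 1]) for _ in range(n)]  # uniform random point of {-1,1}^n
print(linear_threshold(w, 0.0, x))
print(linear_threshold(w, 0.0, attribute_noise(x, 0.1)))  # label of a noisy copy
```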
We describe a quantum PAC learning algorithm for DNF formulae under the uniform distribution with a query complexity of $\tilde{O}(s^{3}/\epsilon + s^{2}/\epsilon^{2})$, where $s$ is the size of the DNF formula and $\epsilon$ is the PAC error parameter. If $s$ and $1/\epsilon$ are comparable, this gives a modest improvement over a previously known classical query complexity of $\tilde{O}(ns^{2}/\epsilon^{2})$. We also show a lower bound of $\Omega(s\log n/n)$ on the query complexity of any quantum PAC algorithm for learning a DNF of size $s$ with $n$ inputs under the uniform distribution.
... It is known that the sequence of nucleotides in such a promoter region distinguishes it from non-promoter DNA regions; that is, there is a function that maps input data consisting of (a ... [Man92, BFJ+94, BJ95, BT95]. However, not even Fourier methods had produced an efficient ...
We introduce a new algorithm designed to learn sparse perceptrons over input representations which include high-order features. Our algorithm, which is based on a hypothesis-boosting method, is able to PAC-learn a relatively natural class of target concepts. Moreover, the algorithm appears to work well in practice: on a set of three problem domains, the algorithm produces classifiers that utilize ...
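As a rough illustration of the target representation (a sketch with made-up weights and features, not the authors' code), a sparse perceptron over high-order features is a threshold of a few weighted conjunctions of input bits.

```python
def monomial(indices, x):
    """High-order feature: the conjunction of the listed input bits."""
    return all(x[i] for i in indices)

def sparse_perceptron(terms, theta, x):
    """terms: a short list of (weight, index-tuple) pairs; output is thresholded."""
    return sum(w for w, idx in terms if monomial(idx, x)) >= theta

# Hypothetical 3-term sparse perceptron over four Boolean inputs.
terms = [(2.0, (0, 1)), (1.0, (2,)), (-1.5, (1, 3))]
print(sparse_perceptron(terms, 1.0, [1, 1, 0, 1]))  # 2.0 - 1.5 = 0.5 < 1.0 -> False
```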
Abstract Goldman and Kearns [GK91] recently introduced a notion of the teaching dimension of a concept class. The teaching dimension is intended to capture the combinatorial difficulty of teaching a concept class. We present a computational analog which allows us to make ...
ABSTRACT Keywords and Synonyms: Sum of products notation; Learning disjunctive normal form formulas (or expressions); Learning sums of products. Problem Definition: A Disjunctive Normal Form (DNF) expression is a Boolean expression written as a disjunction of terms, where each term is the conjunction of Boolean variables that may or may not be negated. For example, \( (v_1 \wedge \overline{v_2}) \vee (v_2 \wedge v_3) \) is a two-term DNF expression over three variables. DNF expressions occur frequently in digital circuit design, where DNF is often referred to as sum of products notation. From a learning perspective, DNF expressions are of interest because they provide a natural representation for certain types of expert knowledge. For example, the conditions under which complex tax rules apply can often be readily represented as DNFs. Another nice property of DNF expressions is their universality ...
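To make the definition concrete, here is a small sketch (an illustrative encoding, not tied to any particular paper) that evaluates the two-term example above, with each term encoded as a list of signed literals: +i means v_i, -i means NOT v_i.

```python
def eval_dnf(terms, v):
    """v maps variable index -> bool; a DNF is true iff some term is satisfied."""
    def literal(lit):
        return v[abs(lit)] if lit > 0 else not v[abs(lit)]
    return any(all(literal(l) for l in term) for term in terms)

dnf = [[1, -2], [2, 3]]                              # (v1 & ~v2) | (v2 & v3)
print(eval_dnf(dnf, {1: True, 2: False, 3: False}))  # True via the first term
print(eval_dnf(dnf, {1: False, 2: True, 3: True}))   # True via the second term
print(eval_dnf(dnf, {1: False, 2: True, 3: False}))  # False: no term satisfied
```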
We give an algorithm that with high probability properly learns random monotone t(n)-term DNF under the uniform distribution on the Boolean cube $\{0,1\}^n$. (Supported in part by NSF award CCF-0209064.)
... One such uniform-distribution learning algorithm is Jackson's Harmonic Sieve algorithm for learning s-term DNF formulae [17]. ... Under the inner product $\langle f,g \rangle = \mathrm{E}[fg]$ and norm $\|f\| = \sqrt{\mathrm{E}[f^{2}]}$, the $2^{n}$ parity functions $\{\chi_A\}_{A \subseteq [n]}$ form an orthonormal basis for the vector space of real-valued functions on ...
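The orthonormality claim is easy to check numerically; the following sketch (a direct computation under the definitions above, with hypothetical index sets) averages products of parities over the whole cube.

```python
from itertools import product

def chi(A, x):
    """Parity chi_A(x) = product of x_i over i in A, for x in {-1,1}^n."""
    p = 1
    for i in A:
        p *= x[i]
    return p

def inner(A, B, n):
    """<chi_A, chi_B> = E[chi_A * chi_B] under the uniform distribution."""
    pts = list(product([-1, 1], repeat=n))
    return sum(chi(A, x) * chi(B, x) for x in pts) / len(pts)

n = 3
print(inner((0, 1), (0, 1), n))  # 1.0: each parity has unit norm
print(inner((0, 1), (2,), n))    # 0.0: distinct parities are orthogonal
```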
In a very strong positive result for passive learning algorithms, Bshouty et al. showed that DNF expressions are efficiently learnable in the uniform random walk model. It is natural to ask whether the more expressive class of thresholds of parities (TOP) is similarly learnable, but the Bshouty et al. time bound becomes exponential in this case. We ...
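For reference, a threshold of parities is simply the sign of a weighted sum of parity functions over $\{-1,1\}^n$; the sketch below (with made-up weights, not an example from the paper) shows the representation directly.

```python
def parity(A, x):
    """chi_A(x) = product of x_i over i in A, for x in {-1,1}^n."""
    p = 1
    for i in A:
        p *= x[i]
    return p

def top(weighted_parities, x):
    """Threshold of parities: sign of a weighted sum of parity functions."""
    return 1 if sum(w * parity(A, x) for w, A in weighted_parities) >= 0 else -1

# Hypothetical TOP over {-1,1}^3: sign(x0*x1 + x1*x2 - 0.5*x2).
f = [(1.0, (0, 1)), (1.0, (1, 2)), (-0.5, (2,))]
print(top(f, (1, 1, -1)))  # 1*1 + 1*(-1) + (-0.5)*(-1) = 0.5 -> outputs 1
```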