We present the first defence against DNS-amplification DoS attacks that is compatible with common DNS server configurations and with the important DNSSEC standard. We show that the proposed DNS-authentication system is efficient and effectively prevents DNS-based amplification DoS attacks abusing DNS name servers. We present a game-theoretic model and analysis, predicting widespread adoption of our design, sufficient to reduce the threat of DNS amplification DoS attacks. To further reduce costs and provide additional defences for DNS servers, we show how to deploy our design as a cloud-based service.
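As context for the threat model (an illustrative sketch, not part of the paper's defence), the danger of DNS amplification comes from the ratio between the attacker's small spoofed query and the name server's much larger response; the byte counts below are rough assumptions, not measurements from the paper:

```python
# Illustrative sketch: why open DNS name servers amplify DoS traffic.
# Byte sizes are rough assumptions, not figures from the paper.

def amplification_factor(query_bytes: int, response_bytes: int) -> float:
    """Bandwidth gain an attacker obtains by spoofing the victim's
    source IP in a small query that elicits a large response."""
    return response_bytes / query_bytes

# A ~64-byte query can elicit a multi-kilobyte DNSSEC-signed response,
# so each spoofed query multiplies the attacker's bandwidth at the victim.
factor = amplification_factor(query_bytes=64, response_bytes=3200)
print(f"amplification: x{factor:.0f}")  # amplification: x50
```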
We present the results of the first long-term user study of site-based login mechanisms which force and train users to login safely. We found that interactive site-identifying images received 70% detection rates, significantly better than the results of the typical login ceremony and of passive defense indicators [in: CHI'06: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, New York, 2006, pp. 601--610; Computers & Security 28(1--2), 2009, pp. 63--71; in: SP'07: Proceedings of the 2007 IEEE Symposium on Security and Privacy, IEEE Computer Society, Washington, 2007, pp. 51--65]. We also found that combining login bookmarks with interactive images and 'non-working' buttons/links achieved the best detection rates (82%) and overall resistance rates (93%). We also present WAPP (Web Application Phishing-Protection), an effective server-side solution which combines the login bookmark and the interactive custom image indicators. WAPP provides two-factor and two-sided authentication.
People who use secure messaging apps are vulnerable to a hacked or malicious server unless they manually complete an authentication ceremony. In this article, we describe the usability challenges of the authentication ceremony and research to improve it. We conclude with recommendations for service providers and directions for research.
Recently, many popular Instant-Messaging (IM) applications announced support for end-to-end encryption, claiming confidentiality even against a rogue operator. Is this, finally, a positive answer to the basic challenge of usable security presented in the seminal paper, 'Why Johnny Can't Encrypt'? Our work evaluates the implementation of end-to-end encryption in popular IM applications: WhatsApp, Viber, Telegram, and Signal, against established usable-security principles, and in quantitative and qualitative usability experiments. Unfortunately, although participants expressed interest in confidentiality, even against a rogue operator, our results show that current mechanisms are impractical to use, leaving users with only the illusion of security. Hope is not lost. We conclude with directions which may allow usable end-to-end encryption for IM applications.
Denial of Service (DoS) attacks pose a critical threat to the stability and availability of the Internet. In Distributed DoS (DDoS) attacks, multiple attacking agents cooperate in an attempt to cause excessive load in order to disconnect a victim. The frequency and volume of DoS attacks continue to break records, reaching 400 Gb/s. Although many defenses were proposed, very few are adopted, due to low effectiveness, high costs and the changes required to integrate them into the existing infrastructure. To improve resilience against DDoS attacks, service providers move their operations to cloud platforms. Unfortunately, even if the cloud applies filtering, rate limiting and deep packet inspection, the attacker can subvert those defenses by distributing the attack among multiple attacking IP addresses and aiming the flood at the victim. In this talk we focus on DDoS attacks which disrupt the availability of a service by depleting the bandwidth or the resources of an operating system or application on the server side. Such attackers typically employ a botnet to generate large traffic volumes. A botnet consists of bots (compromised computers) located in different parts of the Internet. The bots, depending on their privileges on the victim host, send multiple packets either from spoofed IP addresses or from their real ones. We utilize the cloud platform to implement Stratum Filtering, a novel mechanism aimed at protecting the availability and resilience of web servers hosted on clouds. Our mechanism is easy to integrate into the cloud platform and requires changes neither to the existing infrastructure nor to the protected servers. Stratum Filtering leverages the large IP address blocks allocated to clouds, the distributed availability zones, and the support for service migration within cloud platforms.
These advantages offered by clouds enable us to restrict the attacker to a naive strategy, where the best possible attack is simply to flood the entire IP address block allocated to the cloud. However, such an attack requires a huge volume of traffic, exposing the malicious sources. In addition, controlling and coordinating a number of bots large enough to disconnect a cloud is not trivial to accomplish. Stratum Filtering is comprised of three layers, such that each successive layer applies filtering targeted at blocking a different type of attack traffic at the network, transport or application layer. The filtering uses the difference in behavior between legitimate clients and bots to identify and filter traffic arriving from nonstandard clients. To characterize and model the legitimate behavior of web clients, we perform large-scale Internet measurements using a distributed ad-network, and collect data across multiple networks and geographical areas. Client connections to the protected web servers via Stratum Filtering consist of three steps, as follows:

1. Anti-spoofers map to proxy: A client makes a DNS request and is mapped to one of multiple proxies, chosen pseudorandomly as a function of the client's IP address. As a result, spoofers cannot receive the response, hence this step foils spoofer-only attacks; responses are also not sent to blacklisted IP addresses (detected in previous interactions).
2. Redirect to server: The client sends the HTTP request to the proxy, which redirects non-blacklisted clients to a pseudorandomly assigned server IP and port, with a cookie for validation at the next step. This lightweight process facilitates effective filtering and blacklisting to protect the ("heavy") processing step (next), and supports load balancing, scalability and separation of authenticated vs. unauthenticated clients. Since this step is "light", it is hard to attack or to disrupt already-connected clients.
3. Server filtering: Allow only valid (server IP, port) pairs by validating and processing requests to the web server. Validation is performed after the TCP handshake has completed correctly, hence it is unlikely to be spoofed, and a validation failure is a likely indication that the IP address is controlled by the adversary, resulting in its blacklisting. The validation checks include: (1) the cookie (from the proxy); (2) a match between the server IP and port and the client's IP address; (3) a match between the client's IP address and the TTL value; (4) the number of connections (SYN packets) and of out-of-order packets does not exceed a threshold.

We show that Stratum Filtering does not harm the performance of legitimate clients, while effectively filtering and dropping traffic directed by the malicious hosts at the protected web servers.
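The server-side validation checks can be sketched as follows; the data structures, the toy hash-based proxy mapping, and the numeric thresholds are all illustrative assumptions of ours, not the paper's implementation, which operates on live TCP/IP state after the handshake completes:

```python
# Minimal sketch of the four server-side validation checks described
# above. Field names, the toy assignment hash, and thresholds are
# hypothetical; they stand in for the proxy's real pseudorandom mapping.
from dataclasses import dataclass

@dataclass
class Request:
    client_ip: str
    server_ip: str
    server_port: int
    cookie: str
    ttl: int
    syn_count: int      # SYN packets seen from this client
    out_of_order: int   # out-of-order packets seen from this client

def expected_assignment(client_ip: str) -> tuple[str, int]:
    """(server IP, port) the proxy would pseudorandomly assign to this
    client; a toy hash standing in for the real mapping."""
    h = hash(client_ip)
    return (f"10.0.0.{h % 256}", 8000 + h % 1000)

def validate(req: Request, valid_cookie: str, expected_ttl: int,
             max_syn: int = 10, max_ooo: int = 5) -> bool:
    checks = [
        req.cookie == valid_cookie,                     # (1) proxy cookie
        (req.server_ip, req.server_port)
            == expected_assignment(req.client_ip),      # (2) IP/port match
        abs(req.ttl - expected_ttl) <= 2,               # (3) TTL consistency
        req.syn_count <= max_syn
            and req.out_of_order <= max_ooo,            # (4) rate thresholds
    ]
    return all(checks)  # failure => blacklist req.client_ip
```

A client that fails any check is treated as adversary-controlled and blacklisted, so the "heavy" web-server processing is reached only by clients that completed the proxy step correctly.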
We study the trade-off between the benefits obtained by communication vs. the risks due to exposure of the location of the transmitter. To study this problem, we introduce a game between two teams of mobile agents, the P-bots team and the E-bots team. The E-bots attempt to eavesdrop and collect information, while evading the P-bots; the P-bots attempt to prevent this by performing patrol and pursuit. The game models a typical use-case of micro-robots, i.e., their use for (industrial) espionage. We evaluate strategies for both teams, using analysis and simulations.
Cyber physical systems (CPS) typically contain multiple control loops, where the controllers use actuators to trigger a physical process, based on sensor readings. Attackers typically coordinate attacks using multiple corrupted devices; defenses often focus on detecting this abnormal communication. We present the first provably-covert channel from a 'covertly-transmitting sensor' to a 'covertly-receiving actuator', interacting only indirectly, via a benign threshold-based controller. The covert devices cannot be practically distinguished from benign devices. The covert traffic is encoded within the output noise of the covertly-transmitting sensor, whose distribution is indistinguishable from that of a benign sensor (with comparable specifications). We evaluated the channel, showing its applicability for signaling and coordinating attacks between the sensor and the actuator. This capability requires re-evaluating security monitoring and prevention systems in CPS.
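A toy illustration of the idea (our own simplification, not the paper's provably-covert construction): a covert sensor hides one bit per reading in the sign of its noise, while keeping the noise magnitude consistent with a benign sensor's spec, and the benign threshold controller then leaks that bit to the actuator through its on/off decision near the setpoint. A real covert channel must match the full benign noise distribution, as the paper requires; the constants here are hypothetical.

```python
# Toy sketch of an indirect covert channel through a benign
# threshold controller. Constants are illustrative assumptions.

THRESHOLD = 50.0  # benign controller: actuate iff reading > THRESHOLD
NOISE = 0.4       # noise magnitude, within a benign sensor's spec

def covert_reading(true_value: float, bit: int) -> float:
    """Covert sensor: encode one bit in the sign of the additive noise.
    Magnitude matches a benign sensor; only the sign carries data."""
    return true_value + (NOISE if bit else -NOISE)

def controller(reading: float) -> bool:
    """Benign threshold controller, unaware of the covert channel."""
    return reading > THRESHOLD

def covert_receive(actuated: bool) -> int:
    """Covert actuator: near the setpoint, the actuation decision
    reveals the noise sign, i.e. the covert bit."""
    return 1 if actuated else 0

# Signal the bits 1, 0, 1 while the process hovers at the setpoint;
# sensor and actuator never communicate directly.
received = [covert_receive(controller(covert_reading(THRESHOLD, b)))
            for b in (1, 0, 1)]
print(received)  # [1, 0, 1]
```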
We investigate an understudied threat: networks of stealthy routers (S-Routers), relaying messages to a hidden destination. The S-Routers relay communication along a path of multiple short-range, low-energy hops, to avoid remote localization by triangulation. Mobile devices called Interceptors can detect communication by an S-Router, but only when the Interceptor is next to the transmitting S-Router. We examine algorithms for a set of mobile Interceptors to find the destination of the communication relayed by the S-Routers. The algorithms are compared according to the number of communicating rounds before the destination is found, i.e., rounds in which data is transmitted from the source to the destination. We evaluate the algorithms analytically and using simulations, including against a parametric, optimized strategy for the S-Routers. Our main result is an Interceptors algorithm that bounds the expected number of communicating rounds by a term quasilinear in the number of S-Routers. For the case where S-Routers transmit at every round ("continuously"), we present an algorithm that improves this bound.
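A toy model of the pursuit (our own simplification, not the paper's algorithm) shows why the cost is counted in communicating rounds: an Interceptor standing next to a transmitting S-Router detects it and can advance one hop along the relay path per round, so on a simple chain the round count grows linearly with the number of relays. The paper's contribution is a quasilinear bound that holds even against optimized, non-continuous S-Router strategies.

```python
# Toy chain model: S-Routers relay source -> destination each round;
# an Interceptor next to a transmitting S-Router advances one hop.
# This is an illustrative simplification, not the paper's algorithm.

def rounds_to_find_destination(n_routers: int) -> int:
    """Communicating rounds for one Interceptor to walk a chain of
    n_routers relays, advancing one hop per detected transmission."""
    position = 0  # Interceptor starts next to the source
    rounds = 0
    while position < n_routers:  # destination lies past the last relay
        rounds += 1              # one communicating round occurs ...
        position += 1            # ... and the Interceptor advances a hop
    return rounds

print(rounds_to_find_destination(8))  # 8
```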
Accepted to Science and Engineering Ethics (Springer).
Online Social Networks (OSNs) have rapidly become a prominent and widely used service, offering a wealth of personal and sensitive information with significant security and privacy implications. Hence, OSNs are also an important - and popular - subject for research. To perform research based on real-life evidence, however, researchers may need to access OSN data, such as texts and files uploaded by users and connections among users. This raises significant ethical problems. Currently, there are no clear ethical guidelines, and researchers may end up (unintentionally) performing ethically questionable research, sometimes even when more ethical research alternatives exist. For example, several studies have employed 'fake identities' to collect data from OSNs, but fake identities may be used for attacks and are considered a security issue. Is it legitimate to use fake identities for studying OSNs or for collecting OSN data for research? We present a taxonomy of the ethical challenges facing researchers of OSNs and compare different approaches. We demonstrate how ethical considerations have been taken into account in previous studies that used fake identities. In addition, several possible approaches are offered to reduce or avoid ethical misconduct. We hope this work will stimulate the development and use of ethical practices and methods in the research of online social networks.
Draft versions of Foundations of Cybersecurity: Applied Introduction to Cryptography. Updated versions, as well as PowerPoint presentations, are available for download from: http://bit.ly/FOCScrypto. Comments, and especially corrections and suggestions, are appreciated; send email to the author.