A team of researchers at Cornell Tech has uncovered a new type of backdoor attack[1] that they showed can "manipulate natural-language modeling systems to produce incorrect outputs and evade any known defense."

The Cornell Tech team said they believe the attacks could be used to compromise algorithmic trading, email accounts and more. The research was supported by a Google Faculty Research Award as well as backing from the NSF and the Schmidt Futures program.

According to a study released on Thursday[2], the backdoor can manipulate natural-language modeling systems without "any access to the original code or model by uploading malicious code to open-source sites that are frequently used by many companies and programmers."
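To picture that attack vector, consider the purely illustrative sketch below, which is not taken from the study. It shows a victim's ordinary training loop that pulls a loss helper from a widely used open-source repository; if that helper has been tampered with, the attacker's logic runs inside the victim's own training pipeline without the attacker ever seeing the model or its data. The package name `opensource_nlp_utils` and the function `compute_loss` are hypothetical.

```python
# Hypothetical victim training loop (illustrative sketch, not from the study).
# The victim trusts a loss helper from a popular open-source repository; if that
# helper is compromised, the backdoor logic executes inside this loop without
# the attacker ever accessing the victim's model, data, or training code.
import torch
from opensource_nlp_utils import compute_loss  # hypothetical third-party dependency

def train(model, loader, epochs=3, lr=2e-5):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for batch, labels in loader:
            optimizer.zero_grad()
            outputs = model(batch)
            # The victim believes this is an ordinary loss computation.
            loss = compute_loss(model, outputs, batch, labels)
            loss.backward()
            optimizer.step()
```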

The researchers named the attacks "code poisoning" during a presentation at the USENIX Security conference on Thursday. 

The attack could give individuals or companies enormous power to manipulate a wide range of systems, from models that rate movie reviews to an investment bank's machine learning model, causing it to ignore news that would affect a company's stock.

"The attack is blind: the attacker does not need to observe the execution of his code, nor the weights of the backdoored model during or after training. The attack synthesizes poisoning inputs 'on the fly,' as the model is training, and uses multi-objective optimization to achieve high accuracy simultaneously on the main and backdoor tasks," the report said. 

"We showed how this attack can be used to inject single-pixel and physical backdoors into ImageNet models, backdoors that switch the model to a covert functionality, and backdoors that do not require the attacker to modify the input at inference time. We then demonstrated that code-poisoning attacks can evade any known defense, and proposed a new defense based on detecting deviations from the
