FPGA design has long relied on traditional hardware development practices such as waterfall and the V-model, but we believe the future of FPGAs lies in continuous integration, and in this video we explore why.
Transcript: FPGA design has always embraced waterfall or V-model development, so why change to continuous integration? Fundamentally, by adopting continuous integration, FPGA developers can deliver better solutions to customers faster. Most software development has long since transitioned to agile methods, which fully embrace continuous integration.
When a software developer is looking to deploy acceleration, they consider more than the raw power of the chip: they also look at how it will integrate into their existing development and deployment process. GPUs are easily deployable and fully programmable by existing software developers, which makes them an attractive choice. If we follow the old development methods for FPGAs, it could take months to deploy to the field, and few teams will accept that. Simply put, if FPGAs are to be used in modern systems, they must follow modern software development practices.
By using continuous integration, FPGA applications can reach market much faster and gain new features regularly. A continuous integration pipeline can build and test throughout the entire development process. For instance, imagine we are using FPGAs to accelerate AI and a new model architecture comes out that clients want to use. Through a pipeline we could trial new hardware features that optimise for that architecture, then run software tests to measure the impact on accuracy. These new features could then be deployed across FPGAs in the field far more rapidly than through a waterfall approach. If you are interested in finding out more about how to set up continuous integration on FPGAs, visit our website.
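To make the build-and-test flow above concrete, here is a minimal sketch of how an FPGA continuous integration pipeline might be orchestrated. The stage names (lint, simulate, synthesise, regression) and the fail-fast ordering are illustrative assumptions, not a prescribed flow; in practice each stage would invoke your HDL linter, simulator, and vendor synthesis tools, which are stubbed out here as placeholder callables.

```python
# Sketch of a fail-fast CI pipeline for an FPGA project. The stage
# bodies are placeholders -- substitute calls to your actual EDA tools
# (linter, simulator, synthesis, software regression tests).

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Stage:
    name: str
    run: Callable[[], bool]  # returns True on success


def run_pipeline(stages: List[Stage]) -> List[str]:
    """Run stages in order, stopping at the first failure (fail fast)."""
    log = []
    for stage in stages:
        ok = stage.run()
        log.append(f"{stage.name}: {'pass' if ok else 'FAIL'}")
        if not ok:
            break  # later stages are skipped once one fails
    return log


# Example flow: lint -> simulate -> synthesise -> regression tests.
pipeline = [
    Stage("lint", lambda: True),        # e.g. run an HDL linter on the RTL
    Stage("simulate", lambda: True),    # e.g. run testbenches in a simulator
    Stage("synthesise", lambda: True),  # e.g. invoke vendor synthesis in batch mode
    Stage("regression", lambda: True),  # e.g. software-level accuracy tests
]

print(run_pipeline(pipeline))
```

Because synthesis stages can run for hours, a real pipeline would typically run the cheap stages (lint, simulation) on every commit and gate the expensive synthesis and regression stages behind them, which is exactly what the fail-fast loop models.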