Camera traps are used extensively in conservation; the largest coordinated camera trap deployment in the world is the tiger census in India. Yet even in such widespread deployments, the camera traps predominantly used are built for hunters in North America and are not optimized for conservationists. From journal papers on the needs and usage of camera traps, and from our interactions with forest officers and researchers, these are the issues with current camera traps:
- the passive sensor that detects the motion of body heat is unreliable, giving false positives in various environmental conditions
- low-light image quality is lacking, and features of the animal such as markings or sex often cannot be identified
- battery life, especially of the cheaper models, is only a few weeks in practice
- audio recording quality is poor, which matters particularly to researchers, and there is no means to connect external recorders
- theft and/or vandalism of deployed camera traps, which also means loss of data
- lack of serviceability and service centers, which causes camera traps to be shelved even when a minor issue crops up
- all existing camera traps are built with proprietary technology; none are open source, which slows development of this technology
- there is no information on the current state of deployed camera traps; the only way to check on them is to travel to their deployed locations.
SenseCam is an open-source camera trap, ready for mass production, designed modularly and exclusively for the conservation community. The focus of SenseCam is to get the basics right: reliable motion sensing, low latency in capturing images, great image quality (especially in low-light conditions), low cost, good battery life, and an environment-proof enclosure. A scalable modular design enables the appropriate components to be added or changed when a client places an order, similar to configuring a laptop.

We've developed both a passive motion sensor (SensePi, which detects the motion of body heat) and an active one (SenseBe, which detects the break of an IR beam). We know the pros and cons of both, so based on the need we can offer solutions with one or both of them; this also enables targeting of a particular species. Both SensePi and SenseBe already support intuitive wireless configuration of their parameters from a mobile app, and this would be a feature of SenseCam too. It enables batch configuration of devices for mass deployment, and the app can provide a wireless video stream to assist placement.

The recent automotive image sensors used for assisting self-driving cars are also ideal for camera trapping: they offer high dynamic range and excellent low-light capture, while not becoming obsolete quickly. They perform well with both visible and IR light, so the flash can be either. The modular design can also add various wireless communication options, for local anti-theft data backup and long-range metadata transmission.
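The value of offering both sensor types together can be sketched in a few lines. This is an illustrative sketch, not Appiko firmware: the function name, the repeat-activation rule, and the thresholds are all hypothetical, chosen only to show how a beam-break reading and a PIR reading might be fused to reject one-off false triggers.

```python
# Hypothetical sketch of PIR + active-IR-beam fusion (not actual SensePi/
# SenseBe firmware). Thresholds and names are invented for illustration.

def should_trigger(pir_events, beam_broken, window_s=2.0):
    """Trigger a capture when the IR beam is broken, or when the PIR
    fires twice within a short window (rejecting one-off heat flickers
    from, e.g., sun-warmed vegetation).

    pir_events: timestamps (seconds) of recent PIR activations
    beam_broken: True if the active IR beam is currently interrupted
    """
    if beam_broken:
        return True  # a beam break is a strong, direct indication
    if len(pir_events) < 2:
        return False
    # require two PIR activations close together in time
    return (pir_events[-1] - pir_events[-2]) <= window_s

# A lone PIR blip is ignored; repeated blips or a beam break trigger.
print(should_trigger([10.0], beam_broken=False))        # False
print(should_trigger([10.0, 11.2], beam_broken=False))  # True
print(should_trigger([], beam_broken=True))             # True
```

In a real deployment the fusion policy would itself be a configurable parameter, set per species and site from the mobile app.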
While we've done an initial bill of materials to validate that we can achieve the $150–200 price point for the features we're aiming for, some assumptions remain:
- We will get adequate funding to make SenseCam a reality.
- Government officials (forest officers), who are not used to an iterative product development cycle, can be convinced to use the beta devices.
- While the requirements of camera traps are well documented at a granular level, we'll be able to access the end users to make decisions on the finer details, such as using AA or Li-ion batteries.
- We'll be able to tap into the trail camera, security camera, and action camera manufacturing ecosystem in China to manufacture SenseCam at scale.
- We at Appiko, and everyone collaborating with us, have the skills and intent to go the whole way in cracking this problem.
While there are many organizations in the wildlife conservation domain working in the academic and policy development fields, there are hardly any developing technology solutions, especially hardware products. Creating an open-source modular camera trap that's available commercially off the shelf, with extensive documentation of the hardware interfaces and software APIs, would enable the community to build off this platform and thus improve it continuously. Especially now that frameworks are available for running energy-efficient neural networks on microcontroller platforms, a lot can be done with machine learning to detect specific species, including humans. This mindset of collaborating instead of competing would make SenseCam the go-to platform for the community to come together and improve. Going forward, SenseCam with various other sensors added can provide insight into the current health of the forest at various levels.
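What makes those microcontroller frameworks energy efficient is 8-bit integer quantization: weights and activations are stored as int8 and multiplied with integer arithmetic, with one floating-point scale per tensor. The sketch below illustrates the idea on a single dense dot product; all the values and scales are invented for the example, and a real deployment would use a framework such as TensorFlow Lite for Microcontrollers rather than hand-rolled code.

```python
# Toy illustration of 8-bit quantized inference (values are invented).
# The multiply-accumulate runs entirely in integers, which is what makes
# it cheap on a microcontroller; floats appear only in the final rescale.

def quantize(values, scale):
    """Map floats to int8 with a simple symmetric scheme."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dense(x_q, w_q, x_scale, w_scale):
    """Integer dot product; rescale to a float result at the end."""
    acc = sum(xi * wi for xi, wi in zip(x_q, w_q))  # int32 accumulator
    return acc * x_scale * w_scale

x = [0.5, -1.0, 0.25]            # activation vector
w = [0.2, 0.4, -0.1]             # weight vector
x_scale, w_scale = 0.01, 0.005   # per-tensor scales
x_q = quantize(x, x_scale)       # [50, -100, 25]
w_q = quantize(w, w_scale)       # [40, 80, -20]
print(int8_dense(x_q, w_q, x_scale, w_scale))  # ~-0.325, the float dot product
```

Because the accumulation is plain integer arithmetic, a small species-detection network can run within a camera trap's power budget, waking the imaging pipeline only when something interesting is seen.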
One of the primary tasks in the coming months will be to raise funds to make sure SenseCam development happens. Having developed a passive motion sensor (SensePi), we are now fine-tuning our active IR-beam-based motion sensor (SenseBe). This will take a couple of months, especially the enclosure design. For SenseCam itself, we have a basic proof of concept based on the OpenMV development platform and SensePi. There are two potential microcontroller platforms we're contemplating for SenseCam, and this needs to be decided. Development of the IR/visible LED flash module would also start. Depending on the funding raised, hiring for this project would need to happen as well. Luckily for us, being in Bengaluru gives us access to a talented techie crowd while having multiple forest sanctuaries within 100 km for testing.
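The principle behind a camera-based motion trigger like the OpenMV proof of concept is frame differencing. The sketch below shows the idea in plain Python on toy flattened grayscale frames; the actual proof of concept runs on-device with OpenMV's own API, and the function names, frame data, and threshold here are hypothetical.

```python
# Plain-Python sketch of frame-differencing motion detection (illustrative
# only; not the on-device OpenMV code). Frames are flattened lists of
# 8-bit grayscale pixel values.

def mean_abs_diff(frame_a, frame_b):
    """Average per-pixel absolute difference between two same-size frames."""
    diffs = [abs(a - b) for a, b in zip(frame_a, frame_b)]
    return sum(diffs) / len(diffs)

def motion_detected(prev, curr, threshold=10.0):
    """Fire when frames differ by more than a threshold (hypothetical value)."""
    return mean_abs_diff(prev, curr) > threshold

background = [100] * 16              # static 4x4 scene, flattened
animal = [100] * 12 + [180] * 4      # a bright blob enters the bottom row

print(motion_detected(background, background))  # False: nothing changed
print(motion_detected(background, animal))      # True: 4 pixels jumped by 80
```

Pairing a differencing trigger like this with the PIR/beam sensors is one way a modular design lets the trigger strategy be tuned per deployment.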
Adequate funds to develop a first version of SenseCam with optimized firmware and electronic circuitry, but with a 3D-printed enclosure. External collaborators for tech development would be a shot in the arm for picking up the pace of development. A feedback mechanism with the diverse end users, to fine-tune features and do alpha/beta testing, would also be of great assistance. We are not good at marketing and social media, so any assistance here would be useful too.