Our consultants handle AI system definition/selection, training, testing, and biotech AI project management, with extensive experience in AI-enhanced cell imaging, cancer AI 3D imaging, training and learning of advanced math and physics, and biotech AI development for life science and biochemistry AI systems.
More importantly, with specialized training and expertise, we know the strengths, the weaknesses, and where competitors will likely "fall down".
AI is no more successful than the experience and caliber of the designers, programmers, and scientists who develop, apply, and train it. A typical AI program has aspects of an "expert system" with added self-improvement capability (machine learning), based on networks, adjustments, and feedback telling the AI how and whether it is meeting the goals and other directives set for it.
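To make that structure concrete, here is a minimal sketch (hypothetical rule names, weights, and feedback values, not drawn from any of our actual systems): a hand-written rule base does the scoring, and a feedback step nudges the rule weights toward the goal the trainers set.

```python
# Minimal sketch of an "expert system" core with a feedback-driven
# self-improvement loop. Rules, weights, and the sample are illustrative only.

RULES = {
    "high_contrast": 0.5,   # hand-written expert rules with starting weights
    "known_marker":  0.8,
    "noisy_region": -0.4,
}

def score(features: dict) -> float:
    """Weighted sum of the expert rules that fire for this sample."""
    return sum(w for name, w in RULES.items() if features.get(name))

def apply_feedback(features: dict, target: float, lr: float = 0.1) -> None:
    """Nudge the weights of the rules that fired, based on how far the score missed the goal."""
    error = target - score(features)
    for name in RULES:
        if features.get(name):
            RULES[name] += lr * error

sample = {"high_contrast": True, "noisy_region": True}
apply_feedback(sample, target=1.0)   # the trainer signals this sample should score near 1.0
print(score(sample))
```

The point of the sketch is only that the "expert" content (the rules) and the learning (the weight adjustment from feedback) are separate ingredients, and the quality of both depends on the people supplying them.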
Our life science AI scientist-consultants have hundreds of hours of education, testing, correction, and training of AI systems in the areas and tools of bioscience and advanced math, as well as in directing AI-aided experiments, resulting in breakthroughs published as recently as 2022. Such work requires teamwork between the AI programmer/application specialist and sharp scientists who know the capabilities, interpret the lab results, and tweak the AI.
Don't try this with amateurs.
It is instructive to look at predecessors in imaging, improving visibility, filling surfaces, finding and matching similarities, and so on:
The most developed area of AI has been enhanced imaging, often concerned with increasing visibility, sharpness, contrast, and differentiation among portions of an image, some of which are mostly indiscernible to the human eye. This imaging specialty evolved from early CGI for movies like Star Wars, on into video gaming and high-definition TV, where sharpening, ghost removal, heavy interpolation, and 3D views enabled games and videos to exceed their data feed rates.[2]
Publishing and photography software adapted these technologies as well. Medical imaging with color coding, integration of layers, and 3D rotation was another early and developing use of enhanced imaging.
In biology and cell imaging, the tools and AI systems are based somewhat on these technologies, but are more specialized and accessible for adaptation and training by the scientist in the particular application. AI has improved the learning, searching, identification, matching, and mapping capability compared with the earlier hard-programmed routines.
[2] So simply analyzing this history would likely have told us that NVidia would own the early technology for expanding technical, manipulative visual AI (from gaming, etc.).
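As a concrete illustration of that contrast (a synthetic image and a stock sharpening kernel, not any particular tool's method): a hard-programmed enhancement routine applies a fixed filter chosen by an expert, whereas an AI pipeline learns its filters from annotated examples.

```python
# Sketch only: classic hard-programmed sharpening with a fixed kernel.
# The "AI" counterpart would learn the kernel weights from labeled data.
import numpy as np
from scipy.ndimage import convolve

image = np.random.rand(64, 64)          # stand-in for a grayscale micrograph

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)   # fixed, expert-chosen kernel

enhanced = convolve(image, sharpen, mode="nearest")

# A learned pipeline replaces `sharpen` with weights fit to pairs of
# (raw, expert-annotated) images, which is what lets it generalize the
# searching, matching, and mapping described above.
```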
That cell-imaging work has enabled breakthrough discoveries in cell development and in the stages and morphology of human cancer cells, by our top PhD consultant (in an academic environment): adding and incorporating layering and rotational capabilities similar to those used in molecular biophysics and biochemistry for molecule visualization, but also akin to the actual slicing and re-assembly of layers and portions in tomography scans, with added contrast, color-versus-tissue search, structure identification, selection/designation, and manipulation.
Clearly the AI expert must also know how to selectively tag portions of the cell/tissue (that is, be expert in the cell biology, the disease, and the biochemistry). Our top PhD biotech consultant's cancer cell AI imaging development on human cancer at a top-5 northeastern US university is the kind of knowledge and skill you likely need, including knowing how to select and handle human pathology cancer biopsies and stored tissue.
This imaging helped produce their landmark breakthrough paper, now on NIH and PubMed (2022), for which the consultant returned after a post-doc to complete the work and lead the team as lead scientist.
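To show the kind of "tagging" meant here in its simplest possible form (synthetic data and an arbitrary intensity cutoff, not the consultant's actual method): threshold a stained channel and label the connected regions so an expert can designate which ones matter.

```python
# Hedged sketch: tag candidate regions in a stained-tissue channel by
# thresholding intensity and labeling connected components for expert review.
import numpy as np
from scipy.ndimage import label

stain = np.random.rand(128, 128)        # stand-in for one stained-tissue channel
mask = stain > 0.85                     # intensity cutoff chosen by the expert

regions, n_regions = label(mask)        # each connected blob gets an integer tag
print(f"{n_regions} candidate regions tagged for expert review")

# Which cutoff, which channel, and which tags correspond to cancerous versus
# normal tissue are exactly the pathology and cell-biology judgments noted above.
```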
For better results in your biotech AI project, let our Top Consultant analyze, guide, critique or assist your AI efforts in the Life Sciences field. Talk to us about your project.
AI has lately been widely applied beyond the initial limited areas of a) image enhancement, b) text searching, matching, analysis, and synthesizing semi-intelligible output, and c) robotics (learning successful movement).
One of the main weaknesses of AI elsewhere, and of reliance on its conclusions or output, is the quality, correctness, completeness, and accuracy/reliability of its input.
This crucial step does not change, whether a human or an AI does the compilation, distilling, analysis, organizing, and coding (setting data elements):
Unfortunately, humans have partially skipped this crucial step, not really taking the time and extreme effort to determine and codify the indicia, characteristics, and actual reliability of source information and documents, particularly in the subject areas of advanced science.
Be very selective as to exactly what an AI application is going to use for input; for the foreseeable future, raw data, papers, and the internet are of very limited usefulness and questionable quality.
For example, ChatGPT was not connected to the internet from 2021 to 2024, and is now connected only on mobile in a limited manner. Yet users were excited and used it as if it were.
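One way to make the codification requirement concrete is a minimal sketch of a vetted-source record (the fields, ratings, and cutoff below are hypothetical, not a standard): nothing reaches the AI's training or retrieval set until an expert has rated it and it clears the bar.

```python
# Minimal sketch of codifying source reliability before ingestion by an AI system.
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    origin: str             # e.g. "peer-reviewed", "preprint", "web"
    replicated: bool        # independently validated in the lab?
    reliability: float      # expert-assigned score, 0.0 to 1.0

corpus = [
    Source("Validated assay protocol", "peer-reviewed", True, 0.9),
    Source("Unreviewed blog summary", "web", False, 0.2),
]

# Only sources meeting the expert-set bar are admitted as AI input.
admitted = [s for s in corpus if s.replicated and s.reliability >= 0.7]
print([s.title for s in admitted])
```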
Too many "experts" designing, creating, writing, training and testing AI may minimize or partly disregard, or even prefer to not draw attention to this near-intractable problem. So many in this newly-booming field rapidly see visions of dollar signs, as markets are "throwing money" at AI and its' "infrastructure" and the companies beginning to exploit the newer capabilities.
That is, in every new technology (recall the internet bubble of the late 1990's), the hype, hope and expectations can get far ahead of reality.
The reality that endangers the new AI boom remains the huge problem underlying much scientific and other progress: the lack of very highest-level testing, "rating", screening, and classifying of existing and ongoing research (the input) in each field. The peer review system does not fully test the reality of many of the hypotheses and conclusions of much of the research. The AI itself cannot be relied upon to independently evaluate, set up labs and properly run the experiments to validate results, or form groups of top scientists simply to rate the papers in accord with their best assessments.
Search engines have made limited progress toward rating the quality of papers, but the method is imperfect because of external research limitations, the environment, and other factors, as discussed here:
Popularity ratings from citations may not be the best manner of selection. We've seen lesser journal publishers lack the integrity to halt publication of disavowed, discredited, or infringing works. They make money from each submittal that is published, and yield only to a legal challenge. The infringement issue looms large in the AI future.
There is a huge problem in the absence of a proper decision-tree hierarchy for sources and citations.
Vetting and rating research may be done internally at some companies: such validation experiments and process/production testing may be happening, but narrowly, for their own industry, segment, and products. That is, companies have no interest in, and no legal latitude for, refining their proprietary technologies and decisions and then telling everyone (their competitors) what really works, what multiplies yields, and what fails.
The other obstacles, in both industry and academia, are competition and the incomplete disclosure or ambiguity of key steps or methods (deliberately or through oversight). Finally, top scientists see flaws in papers, theories, methods, and experiment design, but there is no incentive to discuss or challenge these mistakes and oversights unless it is a competitor's paper.
You have a "craze" AI bubble bumping up against systemic obstacles (next)
So there are obstacles, plus stock-market AI "irrational exuberance" (to quote Alan Greenspan on prior bubbles). (This is not advice; bubbles can run for years.)
On the other hand, it's a tool, and investors and entrepreneurs know to "look for the picks-and-shovels producers" (an old reference to the ones who actually won out in the Gold Rush).
Another adage: "go in around the edges of something big."
Micron memory, energy storage, NVidia, AI. See a pattern here?
Growth industries matter.
Which leads us back to other limitations on real progress from obstacles like regulation (for example, the requirement to use mouse studies) and politics (inside and outside the science field, and in business).
There is tremendous corporate-culture inertia that often holds back innovation, sometimes rooted in politics, social initiatives, and incompetence, which is now becoming a big concern of shareholders.
Also there is "cultural inertia" and resistance to change, as in health lab tests, classifying cancer cells and stages, standards, insurance codes, etc. ( "It's always been done this way and we believe it works.")
This extends to large government agencies and professional associations limiting what can be "approved".
Then the legal claims blossom over copyrights, IP, and plagiarism.
The methodologies of training, testing, and especially use (not very smart) have created huge artificial limits. The public tools like Google (can we say it out loud?) don't seem very good, and are increasingly hedged and not so relevant.
Also, markets experiencing "irrational exuberance" to invest add big pressure to skip the huge requirement of making sure the "input" has no "garbage" and is properly rated, classified, screened, reviewed, and codified.
Not least, people who have worked with computers over their evolution of the last 40 years know that ultimately some standards must be established to designate, standardize, and codify key information (naming the fields, indexing, data structures), similar to what everyone must do to operate highly structured spreadsheets.
Ultimately users want conclusions, summaries, and exception reports: as in business management, the most important information is distilled into summaries, exception reports (which departments ran large overtime? what yields or failures came from key departments?), totals, year-on-year comparatives, etc., as the sketch below illustrates.
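Here is the kind of exception report meant (made-up department data and an arbitrary 100-hour threshold, purely illustrative): the raw records are reduced to the short list of exceptions a manager acts on.

```python
# Illustrative exception report: distill raw records into only the exceptions.
import pandas as pd

records = pd.DataFrame({
    "department":     ["Assay Dev", "Imaging", "QC", "Imaging", "QC"],
    "overtime_hours": [12,          65,        40,   70,        80],
})

totals = records.groupby("department")["overtime_hours"].sum()
exceptions = totals[totals > 100]       # only departments over the threshold
print(exceptions)                       # Imaging and QC exceed the limit
```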
Legacy and "expert systems .. do all that extremely well. Remember, the German's lost WWII
why? The answer is instructive and relevant.
Our view is, generally: human-hybrid AI with very selective sourcing of information will rule.
And mostly: enhancement of what are actually existing "expert systems"; it took a long time and a narrow application for the "Deep Blue" expert chess program to win at master-level chess.
A Real TEST and PROOF:
The simple test is whether AI reaches the long-touted claim (once made for neural nets) that it can win hugely and consistently in the stock market, AI versus simpler algorithms and "trading systems". The claim in the early 1990s was "a few years". It has not happened yet, or someone owning that perfected AI would win (own) nearly everything.
Thirty years later, the reality has never met the hype, though it is approaching that goal. Compare that narrow (unsuccessful) application to some proponents' current, far grander vision that AI can make scientific discoveries in complex areas requiring broad knowledge, laboratory methods, organizational thinking, and advanced scientific analysis. Compared to the very narrow stock-trading goal, this could take 40+ years. (Pure math might come sooner, and molecular modeling too, but both are narrow, essentially "expert systems".)
We do see, however, an "expert system" + AI + humans hybrid making earthquake prediction more accurate. Most of the successful predictive analyses in biotech are in narrow molecular-behavior models developed over decades; at the core are expert systems, largely "hard-programmed" routines written by top programmers guided by molecular biochemistry experts (a hybrid).
On the other hand, weather prediction and "flight re-scheduling/optimization" expert/AI systems seem recently to be going backward. A lack of involvement by true experts may be the problem.
Once again, politics and lack of top expertise determine the outcome.
Also, recent Google searches incorporating AI seem to many to give worse results than the standard web returns of a few years ago (though they are improving). Google's corporate emphasis on ad revenue, however, has improved its advertiser support system, using an AI/expert system to urge "auto-optimizing" toward Google's higher revenue-generating goals and algorithms.
This makes a great case study, because advertisers "believe" the AI's recommendations get results, but they cost way more. Both are true.
But the algorithms and AI adjustments to campaigns remain shrouded as Google trade secrets. Who takes the larger benefit (profit)? I would bet Google does. This marketing practice of "bundling" and hidden complexity for "some added benefit" is not new, and can lead to trouble.
And, who really knows exactly how results are found? To train anyone or anything directly, there must be a trainer and information that is superior to the subject learner.
Apparently Google's best search results came from 14 years of developing the core expert search system based on careful design, excellent programming, indexes, exhaustive testing and incremental improvements.
The comparison to new AI-assisted search should give AI proponents pause to understand whether the crucial first step of old-fashioned careful human design is being pushed aside. Fine designs create massively valuable intellectual property. Hybrid AI+expert systems in areas like circuit design seem to have high potential. See semiconductor IP market.
Apparently, as with the NWS weather service and its seemingly regressive policies, the time for better intelligences is clearly here, but 98% of humans are not up to it.
As above, fine designs never get superseded, and top humans do that very well.
AI will not come close, for a long time, in areas where subsets of factors change over time; linear modeling is often simply misleading in those systems. Hence, AI models (not straight algorithms) have been failing for decades. We'll know when one starts winning everything.
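A minimal sketch of that pitfall (a synthetic series with an assumed regime shift at t = 50, nothing from a real market or system): a linear model fit to the first regime extrapolates badly once the underlying factors change.

```python
# Sketch: a static linear fit misleads once the underlying regime shifts.
import numpy as np

t = np.arange(100, dtype=float)
y = np.where(t < 50, 2.0 * t, 2.0 * 50 - 1.5 * (t - 50))   # trend reverses at t = 50

# Fit a line only to the first regime, as a naive static model would.
slope, intercept = np.polyfit(t[:50], y[:50], 1)
pred = slope * t + intercept

print("mean error before shift:", float(np.abs(pred[:50] - y[:50]).mean()))
print("mean error after shift: ", float(np.abs(pred[50:] - y[50:]).mean()))
```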
BOTTOM LINE: Anyone in the AI area needs top, experienced, trained AI/science/math experts.
Limited Availability.