Learn how to build an AI agent for research paper retrieval, search, and summarization
For researchers, keeping up with the latest research is like finding a needle in a haystack. Imagine an AI-powered assistant that not only finds the most relevant papers but also synthesizes key insights and answers your specific questions, all in real time.
This article walks through building such an AI research agent using Superlinked's multimodal vector embedding capabilities. By combining semantic relevance and recency in the search itself, we eliminate the need for complex re-ranking while ensuring efficient, accurate retrieval.
TL;DR:
Build a real-time AI research agent using Superlinked's vector search. It bypasses complex RAG pipelines by embedding and querying documents directly, making research faster, simpler, and smarter.
(Want to jump straight into the code? Check out the open source on GitHub. Ready to try semantic search for your own agentic use case?)
This article demonstrates how to build an agent system that uses a kernel agent to route and answer queries.
Where to start when building a research assistant system?
Traditionally, building such a system involves significant complexity and resource investment. Retrieval systems typically fetch a large initial set of documents by relevance and then apply a secondary re-ranking step to refine and reorder the results. While re-ranking improves precision, it substantially increases computational complexity, latency, and overhead because of the large volume of data that must be retrieved up front. Superlinked sidesteps this complexity by combining numeric and categorical embeddings with text embeddings into a single, comprehensive multimodal vector. This approach improves retrieval precision by preserving attribute-specific information within each embedding.
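To make that idea concrete, here is a minimal conceptual sketch (plain NumPy with made-up numbers, not Superlinked's actual internals) of how a text embedding and a recency feature can be combined into one multimodal vector, so a single nearest-neighbor search reflects both signals:
import numpy as np
# Conceptual sketch only -- NOT Superlinked's internal implementation.
# Concatenate a semantic text embedding with a scaled recency feature so
# that one vector comparison reflects both content similarity and freshness.
text_embedding = np.random.rand(768)    # stand-in for a sentence-transformer vector
recency_score = np.array([0.9])         # newer papers score closer to 1.0
recency_weight = 0.5                    # how much freshness should matter
multimodal_vector = np.concatenate([text_embedding, recency_weight * recency_score])
print(multimodal_vector.shape)          # (769,) -- one vector, two signals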
Building an agentic system with Superlinked
This AI agent can do three core things:
- Find papers: Search research papers by topic (e.g., "quantum computing") and rank them by relevance and recency.
- Summarize papers: Condense retrieved papers into digestible insights.
- Answer questions: Extract answers directly from specific research papers based on user queries.
Superlinked removes the need for re-ranking methods because it improves vector search relevance directly. We will use Superlinked's RecencySpace, which specifically encodes temporal metadata, prioritizing recent documents at retrieval time and eliminating the need for computationally expensive re-ranking. For example, if two papers have equal relevance, the more recent one ranks higher.
Step 1: Setting up the toolbox
%pip install superlinked
To keep things simple and modular, I created an abstract Tool class. This will streamline the process of building and adding tools.
import pandas as pd
import superlinked.framework as sl
from datetime import timedelta
from sentence_transformers import SentenceTransformer
from openai import OpenAI
import os
from abc import ABC, abstractmethod
from typing import Any, Optional, Dict
from tqdm import tqdm
from google.colab import userdata
# Abstract Tool Class
class Tool(ABC):
    @abstractmethod
    def name(self) -> str:
        pass

    @abstractmethod
    def description(self) -> str:
        pass

    @abstractmethod
    def use(self, *args, **kwargs) -> Any:
        pass
# Get the API key from Google Colab secrets, falling back to an environment variable
try:
    api_key = userdata.get('OPENAI_API_KEY')
except KeyError:
    api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise ValueError("OPENAI_API_KEY not found. Add it via Tools > User secrets in Colab, or set the OPENAI_API_KEY environment variable.")

# Initialize OpenAI Client
client = OpenAI(api_key=api_key)
model = "gpt-4"
Step 2: Understanding the dataset
This example uses a dataset of 10,000 AI research papers. To keep things simple, just run the cell below and it will automatically download the dataset into your working directory. You can also use your own data sources, such as research papers or other academic content. If you do, you only need to adjust the schema slightly and rename the columns.
import pandas as pd
!wget --no-check-certificate 'https://drive.google.com/uc?export=download&id=1FCR3TW5yLjGhEmm-Uclw0_5PWVEaLk1j' -O arxiv_ai_data.csv
For now, to keep things running quickly, we'll use a small subset of 100 papers, but feel free to try the example with the full dataset. A key technical detail here is that the timestamps in the dataset are converted from string timestamps (like '1993-08-01 00:00:00+00:00') into pandas datetime objects. This conversion is essential because it lets us perform date/time operations.
df = pd.read_csv('arxiv_ai_data.csv').head(100)
# Convert to datetime but keep it as datetime (more readable and usable)
df['published'] = pd.to_datetime(df['published'])
# Ensure summary is a string
df['summary'] = df['summary'].astype(str)
# Add 'text' column for similarity search
df['text'] = df['title'] + " " + df['summary']
Debug: Columns in original DataFrame: ['authors', 'categories', 'comment', 'doi', 'entry_id', 'journal_ref', 'pdf_url', 'primary_category', 'published', 'summary', 'title', 'updated']
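A quick optional sanity check confirms the timestamp conversion worked: published should now be a timezone-aware datetime column rather than a string, which is what enables the date arithmetic later on.
# Optional sanity check: 'published' should now be a timezone-aware datetime
# column, enabling date arithmetic (filtering, recency scoring) later on.
print(df['published'].dtype)                               # e.g. datetime64[ns, UTC]
print(df['published'].min(), "->", df['published'].max())  # date range of the subset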
Understanding the dataset columns
Below is a brief overview of the key columns in our dataset, which will matter in the steps that follow:
- published: The publication date of the research paper.
- summary: The paper's abstract, providing a concise overview.
- entry_id: The unique identifier for each paper from arXiv.
For this demonstration, we focus on four columns: entry_id, published, title, and summary. To improve search quality, the title and summary are combined into a single, comprehensive text column, which forms the basis of our embedding and search process.
A note on Superlinked's in-memory indexer: Superlinked's in-memory indexing keeps our data directly in RAM, making retrieval extremely fast, which is ideal for real-time search and rapid prototyping.
Step 3: Defining the Superlinked schema
To move forward, we need a schema to model our data. We define a PaperSchema with the key fields:
class PaperSchema(sl.Schema):
    text: sl.String
    published: sl.Timestamp  # This will handle datetime objects properly
    entry_id: sl.IdField
    title: sl.String
    summary: sl.String

paper = PaperSchema()
Defining Superlinked spaces for effective retrieval
A crucial step in organizing and searching our dataset effectively is defining two vector spaces: TextSimilaritySpace and RecencySpace.
- Text similarity space: The TextSimilaritySpace encodes textual information, such as the titles and abstracts of research papers, into vectors. By converting text into embeddings, this space dramatically improves the ease and accuracy of semantic search.
text_space = sl.TextSimilaritySpace(
    text=sl.chunk(paper.text, chunk_size=200, chunk_overlap=50),
    model="sentence-transformers/all-mpnet-base-v2"
)
- Recency space: The RecencySpace handles temporal metadata, emphasizing how recently research was published. By encoding timestamps, this space gives greater weight to newer documents. As a result, search results balance content relevance with publication date, favoring recent insights.
recency_space = sl.RecencySpace(
    timestamp=paper.published,
    period_time_list=[
        sl.PeriodTime(timedelta(days=365)),    # papers within 1 year
        sl.PeriodTime(timedelta(days=2*365)),  # papers within 2 years
        sl.PeriodTime(timedelta(days=3*365)),  # papers within 3 years
    ],
    negative_filter=-0.25
)
Think of RecencySpace as a time-based filter, similar to sorting your emails by date or viewing Instagram posts with the newest first.
- Shorter periods (like 365 days) enable fine-grained, year-by-year time ranking.
- Longer periods (like 1095 days) create broader time bands.
- The negative_filter penalizes papers older than the longest period; here they receive a score of -0.25.
To make this clearer, consider the following example, where two papers have identical content relevance but are ranked differently according to their publication dates.
Paper A: Published in 1996
Paper B: Published in 1993
Scoring example:
- Text similarity score: Both papers get 0.8
- Recency score:
- Paper A: Receives the full recency boost (1.0)
- Paper B: Gets penalized (-0.25 due to negative_filter)
Final combined scores:
- Paper A: Higher final rank
- Paper B: Lower final rank
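For intuition, here is a tiny sketch of that weighted combination, using the hypothetical scores above; Superlinked computes the actual blending internally:
# Hypothetical illustration of blended scoring -- not Superlinked internals.
relevance_weight, recency_weight = 1.0, 0.5

def combined_score(text_similarity: float, recency: float) -> float:
    return relevance_weight * text_similarity + recency_weight * recency

paper_a = combined_score(text_similarity=0.8, recency=1.0)    # recent paper, full boost
paper_b = combined_score(text_similarity=0.8, recency=-0.25)  # hit by negative_filter
print(paper_a, ">", paper_b)  # 1.3 > 0.675, so Paper A ranks higher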
These embeddings are essential for making the dataset more searchable and efficient. They enable both content-based and time-based retrieval and are especially valuable for weighing the relevance and recency of research papers. Together they provide a powerful way to organize and search the dataset by both content and publication time.
Step 4: Building the index
Next, the spaces are combined into an index, which is the backbone of the search engine:
paper_index = sl.Index([text_space, recency_space])
Then the DataFrame is mapped to the schema and loaded in batches (10 papers at a time) into the in-memory store:
# Parser to map DataFrame columns to schema fields
parser = sl.DataFrameParser(
    paper,
    mapping={
        paper.entry_id: "entry_id",
        paper.published: "published",
        paper.text: "text",
        paper.title: "title",
        paper.summary: "summary",
    }
)

# Set up in-memory source and executor
source = sl.InMemorySource(paper, parser=parser)
executor = sl.InMemoryExecutor(sources=[source], indices=[paper_index])
app = executor.run()

# Load the DataFrame with a progress bar using batches
batch_size = 10
data_batches = [df[i:i + batch_size] for i in range(0, len(df), batch_size)]
for batch in tqdm(data_batches, total=len(data_batches), desc="Loading Data into Source"):
    source.put([batch])
The in-memory approach is why Superlinked shines here: 1,000 papers fit comfortably in RAM, and queries run fast without I/O bottlenecks.
Step 5: Crafting the query
Next comes query creation. This is where the query template is defined. To manage this, we need a template that can balance relevance and recency. Here is what that looks like:
# Define the query
knowledgebase_query = (
    sl.Query(
        paper_index,
        weights={
            text_space: sl.Param("relevance_weight"),
            recency_space: sl.Param("recency_weight"),
        }
    )
    .find(paper)
    .similar(text_space, sl.Param("search_query"))
    .select(paper.entry_id, paper.published, paper.text, paper.title, paper.summary)
    .limit(sl.Param("limit"))
)
This lets you choose whether to prioritize content (relevance_weight) or recency (recency_weight), a combination that is very useful for our agent's needs.
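For example, assuming the app and knowledgebase_query objects defined above, the same search can be steered toward freshness or toward pure semantic relevance simply by changing the weights (the specific weight values here are illustrative):
# Same search, two weight profiles. The keyword names match the sl.Param(...)
# placeholders declared in knowledgebase_query above.
recent_first = app.query(
    knowledgebase_query,
    relevance_weight=0.5,   # de-emphasize pure text similarity
    recency_weight=1.0,     # favor the newest papers
    search_query="quantum computing",
    limit=5,
)

most_relevant = app.query(
    knowledgebase_query,
    relevance_weight=1.0,   # favor semantic match
    recency_weight=0.1,     # mostly ignore publication date
    search_query="quantum computing",
    limit=5,
)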
Step 6: Building the tools
Now comes the fun part.
We'll be working with three tools:
- Retrieval tool: This tool is built by connecting to the Superlinked index, allowing it to fetch the top 5 papers for a query. It balances relevance (weight 1.0) and recency (weight 0.5) to serve the "find papers" goal. What we want is to retrieve papers relevant to the query. So if the query is "Find quantum computing papers published between 1993 and 1994", the retrieval tool fetches those papers, ranks them automatically, and returns the results.
class RetrievalTool(Tool):
    def __init__(self, df, app, knowledgebase_query, client, model):
        self.df = df
        self.app = app
        self.knowledgebase_query = knowledgebase_query
        self.client = client
        self.model = model

    def name(self) -> str:
        return "RetrievalTool"

    def description(self) -> str:
        return "Retrieves a list of relevant papers based on a query using Superlinked."

    def use(self, query: str) -> pd.DataFrame:
        result = self.app.query(
            self.knowledgebase_query,
            relevance_weight=1.0,
            recency_weight=0.5,
            search_query=query,
            limit=5
        )
        df_result = sl.PandasConverter.to_pandas(result)
        # Ensure summary is a string
        if 'summary' in df_result.columns:
            df_result['summary'] = df_result['summary'].astype(str)
        else:
            print("Warning: 'summary' column not found in retrieved DataFrame.")
        return df_result
Next up is the SummarizationTool. This tool is designed for cases where a concise summary of one or more papers is needed. To use it, provide paper_ids, the IDs of the papers to summarize. If no paper_ids are provided, the tool cannot work, since these IDs are required to look up the corresponding papers in the dataset.
class SummarizationTool(Tool):
    def __init__(self, df, client, model):
        self.df = df
        self.client = client
        self.model = model

    def name(self) -> str:
        return "SummarizationTool"

    def description(self) -> str:
        return "Generates a concise summary of specified papers using an LLM."

    def use(self, query: str, paper_ids: list) -> str:
        papers = self.df[self.df['entry_id'].isin(paper_ids)]
        if papers.empty:
            return "No papers found with the given IDs."
        summaries = papers['summary'].tolist()
        summary_str = "\n\n".join(summaries)
        prompt = f"""
        Summarize the following paper summaries:\n\n{summary_str}\n\nProvide a concise summary.
        """
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
            max_tokens=500
        )
        return response.choices[0].message.content.strip()
Finally, we have the QuestionAnsweringTool. It leverages the RetrievalTool to fetch relevant papers and then uses them to answer questions. If no relevant papers are found, it answers from general knowledge.
class QuestionAnsweringTool(Tool):
    def __init__(self, retrieval_tool, client, model):
        self.retrieval_tool = retrieval_tool
        self.client = client
        self.model = model

    def name(self) -> str:
        return "QuestionAnsweringTool"

    def description(self) -> str:
        return "Answers questions about research topics using retrieved paper summaries or general knowledge if no specific context is available."

    def use(self, query: str) -> str:
        df_result = self.retrieval_tool.use(query)
        if 'summary' not in df_result.columns:
            # Tag as a general question if summary is missing
            prompt = f"""
            You are a knowledgeable research assistant. This is a general question tagged as [GENERAL]. Answer based on your broad knowledge, not limited to specific paper summaries. If you don't know the answer, provide a brief explanation of why.

            User's question: {query}
            """
        else:
            # Use paper summaries for specific context
            contexts = df_result['summary'].tolist()
            context_str = "\n\n".join(contexts)
            prompt = f"""
            You are a research assistant. Use the following paper summaries to answer the user's question. If you don't know the answer based on the summaries, say 'I don't know.'

            Paper summaries:
            {context_str}

            User's question: {query}
            """
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
            max_tokens=500
        )
        return response.choices[0].message.content.strip()
Step 7: Building the Kernel Agent
Next is the Kernel Agent. It acts as the central controller, ensuring smooth and efficient operation. Serving as the core of the system, the Kernel Agent routes requests based on their intent when multiple agents run in parallel. In single-agent systems like this one, the Kernel Agent directly invokes the relevant tools to handle tasks efficiently.
class KernelAgent:
    def __init__(self, retrieval_tool: RetrievalTool, summarization_tool: SummarizationTool, question_answering_tool: QuestionAnsweringTool, client, model):
        self.retrieval_tool = retrieval_tool
        self.summarization_tool = summarization_tool
        self.question_answering_tool = question_answering_tool
        self.client = client
        self.model = model

    def classify_query(self, query: str) -> str:
        prompt = f"""
        Classify the following user prompt into one of the three categories:
        - retrieval: The user wants to find a list of papers based on some criteria (e.g., 'Find papers on AI ethics from 2020').
        - summarization: The user wants to summarize a list of papers (e.g., 'Summarize papers with entry_id 123, 456, 789').
        - question_answering: The user wants to ask a question about research topics and get an answer (e.g., 'What is the latest development in AI ethics?').

        User prompt: {query}

        Respond with only the category name (retrieval, summarization, question_answering).
        If unsure, respond with 'unknown'.
        """
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
            max_tokens=10
        )
        classification = response.choices[0].message.content.strip().lower()
        print(f"Query type: {classification}")
        return classification

    def process_query(self, query: str, params: Optional[Dict] = None) -> str:
        query_type = self.classify_query(query)
        if query_type == 'retrieval':
            df_result = self.retrieval_tool.use(query)
            response = "Here are the top papers:\n"
            for i, row in df_result.iterrows():
                # Ensure summary is a string and handle empty cases
                summary = str(row['summary']) if pd.notna(row['summary']) else ""
                response += f"{i+1}. {row['title']} \nSummary: {summary[:200]}...\n\n"
            return response
        elif query_type == 'summarization':
            if not params or 'paper_ids' not in params:
                return "Error: Summarization query requires a 'paper_ids' parameter with a list of entry_ids."
            return self.summarization_tool.use(query, params['paper_ids'])
        elif query_type == 'question_answering':
            return self.question_answering_tool.use(query)
        else:
            return "Error: Unable to classify query as 'retrieval', 'summarization', or 'question_answering'."
At this point, all the components of the research agent system are in place. The system can now be started by instantiating the Kernel Agent with the right tools, after which the research agent system is fully operational.
retrieval_tool = RetrievalTool(df, app, knowledgebase_query, client, model)
summarization_tool = SummarizationTool(df, client, model)
question_answering_tool = QuestionAnsweringTool(retrieval_tool, client, model)
# Initialize KernelAgent
kernel_agent = KernelAgent(retrieval_tool, summarization_tool, question_answering_tool, client, model)
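Before routing everything through the agent, you can optionally sanity-check the tools in isolation; a minimal example (the query strings here are just illustrations):
# Optional: exercise the tools directly before using the agent.
papers_df = retrieval_tool.use("quantum computing")
print(papers_df[['title', 'published']].head())

print(question_answering_tool.use("What is quantum search used for?"))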
Now let's test the system:
# Test query
print(kernel_agent.process_query("Find papers on quantum computing in last 10 years"))
Running this invokes the RetrievalTool. It fetches relevant papers ranked by relevance and recency and returns them. If the returned results include paper summaries (indicating papers drawn from the dataset), those details are included in the response.
Query type: retrieval
Here are the top papers:
1. Quantum Computing and Phase Transitions in Combinatorial Search
Summary: We introduce an algorithm for combinatorial search on quantum computers that
is capable of significantly concentrating amplitude into solutions for some NP
search problems, on average. This is done by...

2. The Road to Quantum Artificial Intelligence
Summary: This paper overviews the basic principles and recent advances in the emerging
field of Quantum Computation (QC), highlighting its potential application to
Artificial Intelligence (AI). The paper provi...

3. Solving Highly Constrained Search Problems with Quantum Computers
Summary: A previously developed quantum search algorithm for solving 1-SAT problems in
a single step is generalized to apply to a range of highly constrained k-SAT
problems. We identify a bound on the number o...

4. The model of quantum evolution
Summary: This paper has been withdrawn by the author due to extremely unscientific
errors....

5. Artificial and Biological Intelligence
Summary: This article considers evidence from physical and biological sciences to show
machines are deficient compared to biological systems at incorporating
intelligence. Machines fall short on two counts: fi...
Let's try one more query; this time, let's do a summarization.
print(kernel_agent.process_query("Summarize this paper", params={"paper_ids": ["http://arxiv.org/abs/cs/9311101v1"]}))
Query type: summarization
This paper discusses the challenges of learning logic programs that contain the cut predicate (!). Traditional learning methods cannot handle clauses with cut because it has a procedural meaning. The proposed approach is to first generate a candidate base program that covers positive examples, and then make it consistent by inserting cut where needed. Learning programs with cut is difficult due to the need for intensional evaluation, and current induction techniques may need to be limited to purely declarative logic languages.
I hope this example has been helpful for your development of AI agents and agentic systems. Much of the retrieval functionality shown here is made possible by Superlinked, so please consider starring the repository for future reference whenever precise retrieval capabilities are needed for your AI agents!
Key takeaways
- Combining semantic relevance and recency eliminates the need for complex re-ranking while maintaining search precision for research papers.
- Time-based penalties (negative_filter=-0.25) prioritize recent research when papers have equal content relevance.
- Modular tool-based architecture allows specialized components to handle distinct tasks (retrieval, summarization, question-answering) while maintaining system cohesion.
- Loading data in small batches (batch_size=10) with progress tracking improves system stability when processing large research datasets.
- Adjustable query weights let users balance relevance (1.0) against recency (0.5) based on specific research needs.
- The question-answering component falls back to general knowledge when specific paper context is unavailable, preventing dead-end user experiences.
Staying up to date with the large volume of research papers published regularly can be challenging and time-consuming. An AI assistant workflow that can search relevant research efficiently, synthesize key insights, and answer specific questions from these papers can streamline this process significantly.
Contributors
- Vipul Maheshwari, author
- Filip Makraduli, reviewer