Not Perplexity specifically; I’m talking about the broader “issue” of data mining and its implications :)
You’re aware that it’s in their best interest to make everyone think their “AI” can execute advanced cognitive tasks, even if it has no ability to do so whatsoever and it’s mostly faked?
Are you sure you read the edits in the post? Because they say the exact opposite; Perplexity isn’t all-powerful and all-knowing. It just crawls the web and uses other language models to “digest” what it found. They are also developing their own LLMs. Ask Perplexity yourself or check the documentation.
Taking what an “AI” company has to say about their product at face value in this part of the hype cycle is questionable at best.
Sure, that might be part of it, but they’ve always been very transparent about their reliance on third-party models and web crawlers. I’m not even sure what your point here is. Don’t take what they say at face value; test the claims yourself.
What did you mean by “police” your content?
Seems odd that someone from dbzer0 would be very concerned about data ownership. How come?
That doesn’t make much sense. I created this post to spark a discussion and hear different perspectives on data ownership. While I’ve shared some initial points, I’m more interested in learning what others think about this topic rather than expressing concerns. Please feel free to share your thoughts – as you already have.
I don’t exactly know how Perplexity runs its service. I assume that their AI reacts to such a question by googling the name and then summarizing the results. You certainly received much less info about yourself than you could have gotten via a search engine.
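If that assumption is right, the flow would look something like this rough sketch (both helper functions are hypothetical stand-ins; I have no inside knowledge of Perplexity’s actual stack):

```python
# Rough sketch of a search-then-summarize pipeline. `web_search` and
# `call_llm` are hypothetical stand-ins, NOT Perplexity's real internals.

def web_search(query: str) -> list[str]:
    """Hypothetical: return text snippets from some search index/crawler."""
    raise NotImplementedError("plug in a real search API here")

def call_llm(prompt: str) -> str:
    """Hypothetical: forward the prompt to any chat-completion model."""
    raise NotImplementedError("plug in a real LLM API here")

def answer(question: str) -> str:
    snippets = web_search(question)      # 1. gather public pages
    context = "\n".join(snippets[:10])   # 2. keep only the top hits
    prompt = (
        f"Using only these sources:\n{context}\n\n"
        f"Answer the question: {question}"
    )
    return call_llm(prompt)              # 3. let an LLM digest the results
```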
Feel free to go back to the post and read the edits. They may help shed some light on this. I also recommend checking Perplexity’s official docs.
The prompt was something like, “What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?” Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate compared to the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.
It even talked about this very post in item 3 and in the second bullet point of the “Notable Posts” section.
However, when I ran the same prompt again (or similar prompts), it started hallucinating a lot of information. So, it seems like the answers are very hit or miss. Maybe that’s an issue that can be solved with some prompt engineering, and as one’s account gets more established.
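For anyone who wants to reproduce the test programmatically rather than through the web UI, Perplexity exposes an OpenAI-compatible chat endpoint. Something like the sketch below should work, though the model ID is an assumption on my part, so check their docs for the current list:

```python
# Hedged sketch: re-running the test against Perplexity's public API.
# The endpoint is OpenAI-compatible per their docs; the model ID below
# is an assumption -- verify valid IDs in Perplexity's documentation.
import requests

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # your own key
    json={
        "model": "sonar",  # assumed model ID; check the docs
        "messages": [{
            "role": "user",
            "content": (
                "What do you know about the user llama@lemmy.dbzer0.com "
                "on Lemmy? What can you tell me about his interests?"
            ),
        }],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```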
I think their documentation will help shed some light on this. Reading my edits will hopefully clarify that too. Either way, I always recommend reading their docs! :)
I understand that Perplexity employs various language models to handle queries and that the responses generated may not come directly from those models’ training data, since a significant portion of the output is drawn from what it scraped from the web. However, a major concern for some individuals is the potential for their posts to be scraped and also used to train AI models, hence my post.
I’m not anti-AI, and I see your point that transformers often dissociate the content from its creator. However, one could argue this doesn’t fully mitigate the concern. Even if the model can’t link the content back to the original author, it’s still using their data without explicit consent. The fact that LLMs might hallucinate or fail to attribute quotes accurately doesn’t resolve the potential plagiarism issue; instead, it highlights another problematic aspect of these models, imo.
Yes, the platform in question is Perplexity AI, and it conducts web searches. When it performs a web search, it generally gathers and analyzes a substantial amount of data. This compiled information can be utilized in various ways, including creating profiles of specific individuals or users. The reason I bring this up is that some people might consider this a privacy concern.
I understand that Perplexity employs other language models to process queries and that the information it provides isn’t necessarily part of the training data used by these models. However, the primary concern for some people could be that their posts are being scraped (which raises a lot of privacy questions) and could also, potentially, be used to train AI models. Hence, the question.
Lmao
That would make sense…
Not really. All I did was ask it what it knew about llama@lemmy.dbzer0.com on Lemmy. It hallucinated a lot, though. The answer was five or six items long, and the only one that was even partially correct was the first one – it got the date wrong. But I never fed it any data.
Yeah, it hallucinated that part.
Don’t give me any ideas now >:)
I couldn’t agree more!
Oh, no. I don’t dislike it, but I also don’t have strong feelings about it. I’m just interested in hearing other people’s opinions; I believe that if something is public, then it is indeed public.
I think so too. And I tried to do my research before making this post, but I wasn’t able to find anyone bringing this issue up.
You can check Hugging Face’s website for specific requirements. I will warn you that a lot of home machines don’t meet the minimum requirements for many of the models available there. There is TinyLlama, which can run on most underpowered machines, but its functionality is very limited and it would fall short as an everyday AI chatbot. You can check my other comment too for other options.
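If you want to give TinyLlama a spin, a minimal sketch with Hugging Face’s `transformers` library looks roughly like this (the model ID is the chat variant listed on the Hub; adjust dtype/device to whatever your machine can handle):

```python
# Minimal sketch: running TinyLlama locally with Hugging Face transformers.
# Assumes `pip install transformers torch`; runs on CPU, just slowly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

# Build a chat-formatted prompt and generate a short reply.
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Explain what a web crawler does."}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```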
The issue with that method, as you’ve noted, is that it prevents people with less powerful computers from running local LLMs. There are a few models that can run on an underpowered machine, such as TinyLlama, but most users want a model that can handle a plethora of tasks efficiently, like ChatGPT can, I daresay. For people with such hardware limitations, I believe the only option is relying on models that can be accessed online.
For that, I would recommend Mistral’s Mixtral models (https://chat.mistral.ai/) and the wealth of models available on Poe AI’s platform (https://poe.com/). In particular, I use Poe to interact with the surprising variety of Llama models they have available on the website.
I think that in that case, YouTube is your friend. There are a few pretty straightforward videos that can help you out; if you’re serious about it, you’re going to have to become familiar with it eventually.
Interesting question… I think it would be possible, yes. Poison the data, in a way.