This repository has been archived by the owner on Oct 23, 2019. It is now read-only.

New module, speccy.py #208

Open · wants to merge 43 commits into gonzobot
Changes from 11 commits

Commits (43)
7f19215
Add speccy module
Feb 12, 2018
e4adf75
Add XTU and Reviver
Feb 12, 2018
ef2487f
Fixed SMART formatting
Feb 12, 2018
3ed293c
tons of shit
Jessexd Feb 13, 2018
f46eabb
Formatting fixes
Feb 13, 2018
cec4812
Merge branch 'speccy' into speccy
Jessexd Feb 13, 2018
31aae3d
An actual commit
Jessexd Feb 15, 2018
f2d3c28
Fixed
Jessexd Feb 15, 2018
4245df6
Merge branch 'speccy' of https://github.com/Jessexd/CloudBot into speccy
Jessexd Feb 15, 2018
07e8a30
Merge pull request #2 from Jessexd/speccy
Feb 15, 2018
efceb1d
greatly optmized, much less CPU ussage, reformatted as per PEP8 requi…
Jessexd Feb 16, 2018
42e876d
Merge pull request #3 from Jessexd/speccy
Feb 17, 2018
dc2480a
code readability
Jessexd Feb 17, 2018
48d1711
:O
Jessexd Feb 18, 2018
bafcc32
fixed due to reviewers requested changes
Jessexd Feb 19, 2018
ee495da
added GPU_RE to the be with the other globally defined regex
Jessexd Feb 19, 2018
fb17d50
made variable names less vague and generic, disregardded pep8 line le…
Jessexd Feb 19, 2018
c977e3c
removed useless r in GPU_RE regex since others weren't like that.
Jessexd Feb 19, 2018
d49eb27
Changed Bad: to Badware: (makes it less generic of what "bad" is)
Jessexd Feb 19, 2018
601e599
removed "# -*- coding: utf-8 -*-"
Jessexd Feb 20, 2018
9d2d695
removed repeatedly calling __getitem__ on the list. (for drivespec)
Jessexd Feb 20, 2018
cadcfa4
added back "# -*- coding: utf-8 -*-" because I'm a moron
Jessexd Feb 20, 2018
efa4758
Ooops, removed the cloudbot hook on accident, WOW
Jessexd Feb 20, 2018
bf3cb84
removal of cached speccy.py file
Jessexd Feb 20, 2018
1ca2b0b
Merge pull request #5 from Jessexd/speccy
Feb 20, 2018
dc5a781
Update requirements.txt
Jessexd Feb 22, 2018
60cefc9
Implemented checks so it doesn't error out the plugin, added undersco…
Jessexd Feb 28, 2018
55bcc6c
Added checking ability, shouldn't return nothing in the chat upon err…
Jessexd Mar 1, 2018
1a2d613
small confusing naming I had going
Jessexd Mar 1, 2018
ec039fc
Merge pull request #6 from Jessexd/speccy
Mar 1, 2018
a0baf00
Badware checking was broken, fixed
Jessexd Mar 1, 2018
6ee430c
oops
Jessexd Mar 1, 2018
c211e9a
issue with when badware not found, blabla
Jessexd Mar 4, 2018
4725b3f
smalll wooopsy from the drivespec code (duplicated 2 variables with 1…
Jessexd Mar 4, 2018
288f03d
hmm, didn't sync
Jessexd Mar 4, 2018
b10c6b0
stupid gedit auto-backup
Jessexd Mar 4, 2018
9487894
small change to smartcheck
Jessexd Mar 4, 2018
196a1b5
Typo
Jessexd Mar 4, 2018
1b73957
Overhauled smartcheck, re-ordered output format, little bit more reda…
Jessexd Mar 7, 2018
241abb2
Merge remote-tracking branch 'upstream/speccy' into speccy
Jessexd Mar 7, 2018
77123d9
Merge branch 'gonzobot' into speccy
linuxdaemon Mar 12, 2018
2dcd32a
Merge branch 'gonzobot' into speccy
linuxdaemon Mar 24, 2018
a739caa
Merge branch 'gonzobot' into speccy
linuxdaemon Jun 7, 2018
135 changes: 135 additions & 0 deletions plugins/speccy.py
@@ -0,0 +1,135 @@
# -*- coding: utf-8 -*-
from cloudbot import hook
from cloudbot.event import EventType

import asyncio
import re
import requests
import tempfile

from bs4 import BeautifulSoup as BS


@asyncio.coroutine
@hook.event([EventType.message, EventType.action], singlethread=True)
def get_speccy_url(conn, message, chan, content, nick):

Reviewer: This should use hook.regex() rather than re-implementing the functionality.
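A minimal sketch of what the hook.regex() approach might look like. The URL pattern is taken from the diff, but with its over-broad [A-z] character range narrowed (in ASCII, A-z also matches the punctuation between Z and a); the decorator usage is left in a comment since it depends on CloudBot's plugin loader:

```python
import re

# Pattern from the diff, with [A-z0-9] narrowed to [a-zA-Z0-9];
# compiled once at import time instead of on every message.
SPECCY_RE = re.compile(r"https?://speccy\.piriform\.com/results/[a-zA-Z0-9]+")

# With CloudBot, the hook would look roughly like this (sketch):
#
#   @hook.regex(SPECCY_RE)
#   def speccy(match, message, nick):
#       return parse_speccy(message, nick, match.group(0))

def extract_speccy_url(content):
    """Return the first Speccy results URL in a message, or None."""
    match = SPECCY_RE.search(content)
    return match.group(0) if match else None
```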

    re_content = re.search(
        r"https?:\/\/speccy.piriform.com\/results\/[A-z0-9]+", content)
    if re_content:
        return parse_speccy(message, nick, str(re_content.group(0)))


def parse_speccy(message, nick, url):

    response = requests.get(url)
    if not response:
        return None

    respHtml = response.content
    speccy = tempfile.NamedTemporaryFile()

    with open(speccy.name, 'wb') as f:
        f.write(respHtml)

    soup = BS(open(speccy.name), "lxml-xml")

Reviewer: Why use a file here? soup = BS(response.content, 'lxml-xml') would work just fine.

Author: Going to look into the other two requests, but about this one: it was actually an optimization improvement. If you know a better way, let me know.

https://i.imgur.com/G2Si7PP.png

I noticed it also parses the link faster too:

https://i.imgur.com/R1eKAZT.png

test.py uses the method you mentioned; test2.py uses the current method seen in the snippet.

Reviewer: Please include your test case. Also, the time command shows system resource usage, not actual elapsed time, and disk I/O involves a lot of waiting since disk I/O is slow. This code is also running in the main thread because you have marked it as a coroutine, so any blocking I/O like an HTTP request or file write will halt the entire bot until it finishes.
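The blocking-I/O point can be sketched with modern asyncio (illustrative only, not CloudBot's actual hook machinery; blocking_fetch stands in for the requests call):

```python
import asyncio
import time

def blocking_fetch(url):
    # Stand-in for a blocking call such as requests.get(url).
    time.sleep(0.1)
    return "payload for " + url

async def fetch_without_blocking(url):
    # Hand the blocking call to the default thread pool so the event
    # loop stays free to service other hooks in the meantime.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, blocking_fetch, url)

result = asyncio.run(fetch_without_blocking("http://example.com"))
```

Run inside a coroutine instead, the same time.sleep() would stall every other hook on the bot for its full duration.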

Author: I don't exactly know why it's faster; it was just a guess that ended up working, but here it is so you can try it yourself:

https://gist.github.com/Jessexd/14990cbadad5e62a22fb590bb25a4f9d

Should I add multithreaded=True ?

Reviewer: No, you should remove the @asyncio.coroutine decorator on your hook function, which marks it as a coroutine that runs in the main event loop.

Author: Ah, alright. As for BeautifulSoup, what should be done to parse links faster instead of writing to disk?

Reviewer: Generally, just pass the content from the response straight in:

import requests

from bs4 import BeautifulSoup

with requests.get(url) as response:
    soup = BeautifulSoup(response.content, 'lxml', from_encoding=response.encoding)

print(soup)

It may not be the fastest option, but it's the standard way, as something like disk I/O can be extremely unpredictable.


    try:
        osspec = soup.body.find(
            "div", text='Operating System').next_sibling.next_sibling.text
    except AttributeError:
        return "Invalid Speccy URL"

    try:
        ramspec = soup.body.find(
            "div", text='RAM').next_sibling.next_sibling.text
    except AttributeError:
        ramspec = None

    try:
        cpuspec = soup.body.find(
            "div", text='CPU').next_sibling.next_sibling.text
    except AttributeError:
        cpuspec = None

    try:
        gpufind = soup.body.find(
            "div", text='Graphics').next_sibling.next_sibling.text
        gpuspec = ""
        for gpustring in re.finditer(
                r".*(amd|radeon|intel|integrated|nvidia|geforce|gtx).*\n.*",
                gpufind, re.IGNORECASE):
            gpuspec += gpustring.group()
    except AttributeError:
        gpuspec = None

    try:
        picospec = soup.body.find("div", text=re.compile('.*pico', re.I)).text
    except AttributeError:
        picospec = None

    try:
        kmsspec = soup.body.find("div", text=re.compile('.*kms', re.I)).text
    except AttributeError:
        kmsspec = None

    try:
        boosterspec = soup.body.find(
            "div", text=re.compile('.*booster', re.I)).text
    except AttributeError:

Reviewer: This could be replaced with a simple if soup.body.find(...): check, since .find() will return None when it can't find a matching element.
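The suggested pattern relies on the finder returning None rather than raising. A small illustration using re.search, which has the same return-None-on-no-match contract as BeautifulSoup's .find() (the helper name here is made up for the example):

```python
import re

def find_badware(page_text, keyword):
    # re.search, like BeautifulSoup's .find(), returns None on no match,
    # so a plain conditional replaces the try/except AttributeError dance.
    match = re.search(r".*" + keyword + r".*", page_text, re.I)
    return match.group() if match else None
```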

        boosterspec = None

    try:
        reviverspec = soup.body.find(
            "div", text=re.compile('.*reviver', re.I)).text
    except AttributeError:
        reviverspec = None

    try:
        killerspec = soup.body.find(
            "div", text=re.compile('.*Killer.+Service', re.I)).text
    except AttributeError:
        killerspec = None

    def smartcheck():
        drivespec = soup.body.find_all("div", text="05")
        number_of_drives = len(drivespec)

        values = []
        for i in range(0, number_of_drives):
            z = drivespec[i].next_sibling.next_sibling.stripped_strings
            saucy = list(z)
            rv_index = saucy.index("Raw Value:")
            raw_value = saucy[rv_index + 1]
            if raw_value != "0000000000":
                values.append(str(i + 1))
        return values

    try:
        z = smartcheck()
        if len(z) != 0:

Reviewer: To check whether a list is empty, use its truth value; don't check the length, as that can be an expensive operation on a large list. Something like if z: instead of if len(z) != 0:

            smartspec = " Disk:"
            for item in z:
                smartspec += " #" + item + " "
        else:
            smartspec = None
    except Exception:

Reviewer: Avoid broad exception clauses like this. Either log the raised exception in some way or make the except clause itself more precise. Do not swallow errors that should be logged.
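One way to apply that advice, sketched with stdlib logging; the exception types listed are assumptions about what the SMART parsing could realistically raise:

```python
import logging

logger = logging.getLogger(__name__)

def safe_smartcheck(smartcheck):
    # Catch only the failures the SMART parsing can plausibly raise
    # (missing "Raw Value:" entries, unexpected markup) and log them,
    # instead of a bare `except Exception` that hides every bug.
    try:
        return smartcheck()
    except (AttributeError, ValueError, IndexError) as exc:
        logger.warning("SMART check failed: %s", exc)
        return []
```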

        smartspec = None

    badware_list = [picospec, kmsspec, boosterspec, reviverspec, killerspec]
    badware = ', '.join(filter(None, badware_list))
    if not badware:
        badware = None

    specin = "\x02OS:\x02 {}\
        ● \x02RAM:\x02 {}\
        ● \x02CPU:\x02 {}\
        ● \x02GPU:\x02 {}\
        ● \x02Badware:\x02 {}\
        ● \x02Failing Drive(s):\x02 {}\
        ".format(
        osspec, ramspec, cpuspec, gpuspec, badware, smartspec)

    specout = re.sub(r"\s{2,}|\r\n|\n", " ", specin)

    return specout