Tip
This procedure is part of a course that teaches you how to build a quickstart. If you haven't already done so, check out the course introduction.
Each procedure in this course builds on the previous one, so make sure you've completed the last procedure, send an event from your product, before starting this one.
Logs are generated by applications. They're time-based text records that help your users see what's happening in your system.
New Relic offers a variety of ways to instrument your application to send logs to our Log API.
In this lesson, you learn how to send logs from your product using our telemetry software development kit (SDK).
```python
import os
import random
import datetime
from sys import getsizeof

from newrelic_telemetry_sdk import MetricClient, GaugeMetric, CountMetric, SummaryMetric
from newrelic_telemetry_sdk import EventClient, Event

metric_client = MetricClient(os.environ["NEW_RELIC_LICENSE_KEY"])
event_client = EventClient(os.environ["NEW_RELIC_LICENSE_KEY"])

db = {}
stats = {
    "read_response_times": [],
    "read_errors": 0,
    "read_count": 0,
    "create_response_times": [],
    "create_errors": 0,
    "create_count": 0,
    "update_response_times": [],
    "update_errors": 0,
    "update_count": 0,
    "delete_response_times": [],
    "delete_errors": 0,
    "delete_count": 0,
    "cache_hit": 0,
}
last_push = {
    "read": datetime.datetime.now(),
    "create": datetime.datetime.now(),
    "update": datetime.datetime.now(),
    "delete": datetime.datetime.now(),
}

def read(key):
    print(f"Reading...")
    if random.randint(0, 30) > 10:
        stats["cache_hit"] += 1

    stats["read_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["read_errors"] += 1
    stats["read_count"] += 1
    try_send("read")

def create(key, value):
    print(f"Writing...")
    db[key] = value
    stats["create_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["create_errors"] += 1
    stats["create_count"] += 1
    try_send("create")

def update(key, value):
    print(f"Updating...")
    db[key] = value
    stats["update_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["update_errors"] += 1
    stats["update_count"] += 1
    try_send("update")

def delete(key):
    print(f"Deleting...")
    db.pop(key, None)
    stats["delete_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["delete_errors"] += 1
    stats["delete_count"] += 1
    try_send("delete")

def try_send(type_):
    print("try_send")
    now = datetime.datetime.now()
    interval_ms = (now - last_push[type_]).total_seconds() * 1000
    if interval_ms >= 2000:
        send_metrics(type_, interval_ms)
        send_event(type_)

def send_metrics(type_, interval_ms):
    print("sending metrics...")
    keys = GaugeMetric("fdb_keys", len(db))
    db_size = GaugeMetric("fdb_size", getsizeof(db))
    errors = CountMetric(
        name=f"fdb_{type_}_errors",
        value=stats[f"{type_}_errors"],
        interval_ms=interval_ms
    )
    cache_hits = CountMetric(
        name=f"fdb_cache_hits",
        value=stats["cache_hit"],
        interval_ms=interval_ms
    )
    response_times = stats[f"{type_}_response_times"]
    response_time_summary = SummaryMetric(
        f"fdb_{type_}_responses",
        count=len(response_times),
        min=min(response_times),
        max=max(response_times),
        sum=sum(response_times),
        interval_ms=interval_ms,
    )

    batch = [keys, db_size, errors, cache_hits, response_time_summary]
    response = metric_client.send_batch(batch)
    response.raise_for_status()
    print("Sent metrics successfully!")
    clear(type_)

def send_event(type_):
    print("sending event...")
    count = Event(
        "fdb_method", {"method": type_}
    )
    response = event_client.send_batch(count)
    response.raise_for_status()
    print("Event sent successfully!")

def clear(type_):
    stats[f"{type_}_response_times"] = []
    stats[f"{type_}_errors"] = 0
    stats["cache_hit"] = 0
    stats[f"{type_}_count"] = 0
    last_push[type_] = datetime.datetime.now()
```
Use our SDK
We offer an open source telemetry SDK in several of the most popular programming languages. These SDKs send data to our data ingest APIs, including our Log API. Of these language SDKs, Python and Java work with the Log API.
In this lesson, you learn how to install and use the Python telemetry SDK to send logs to New Relic.
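Before wiring it into the course app, it helps to see the pattern in isolation. Here's a minimal, standalone sketch of sending a single log with the SDK, assuming the newrelic-telemetry-sdk package is installed and the NEW_RELIC_LICENSE_KEY environment variable is set; the message text is just an illustration:

```python
import os

from newrelic_telemetry_sdk import Log, LogClient

# Create a client authenticated with your New Relic license key.
log_client = LogClient(os.environ["NEW_RELIC_LICENSE_KEY"])

# Build a log record and send it to the Log API.
log = Log("Hello from FlashDB!")
response = log_client.send(log)
response.raise_for_status()  # Raise if the API rejected the payload
print("Log sent successfully!")
```

The steps below apply this same pattern inside the course's db.py.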
Navigate to the send-logs/flashDB directory of the course repository.
$cd ../../send-logs/flashDB
If you haven't already, install the newrelic-telemetry-sdk package:
$pip install newrelic-telemetry-sdk
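Optionally, you can confirm the package is importable before touching any code; this quick check isn't part of the course files:

```python
# Optional sanity check: these are the only SDK imports this lesson relies on.
from newrelic_telemetry_sdk import Log, LogClient

print("newrelic-telemetry-sdk is installed")
```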
Open db.py in the IDE of your choice and configure the LogClient:
```python
import os
import random
import datetime
from sys import getsizeof

from newrelic_telemetry_sdk import MetricClient, GaugeMetric, CountMetric, SummaryMetric
from newrelic_telemetry_sdk import EventClient, Event
from newrelic_telemetry_sdk import LogClient

metric_client = MetricClient(os.environ["NEW_RELIC_LICENSE_KEY"])
event_client = EventClient(os.environ["NEW_RELIC_LICENSE_KEY"])
log_client = LogClient(os.environ["NEW_RELIC_LICENSE_KEY"])

db = {}
stats = {
    "read_response_times": [],
    "read_errors": 0,
    "read_count": 0,
    "create_response_times": [],
    "create_errors": 0,
    "create_count": 0,
    "update_response_times": [],
    "update_errors": 0,
    "update_count": 0,
    "delete_response_times": [],
    "delete_errors": 0,
    "delete_count": 0,
    "cache_hit": 0,
}
last_push = {
    "read": datetime.datetime.now(),
    "create": datetime.datetime.now(),
    "update": datetime.datetime.now(),
    "delete": datetime.datetime.now(),
}

def read(key):
    print(f"Reading...")
    if random.randint(0, 30) > 10:
        stats["cache_hit"] += 1

    stats["read_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["read_errors"] += 1
    stats["read_count"] += 1
    try_send("read")

def create(key, value):
    print(f"Writing...")
    db[key] = value
    stats["create_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["create_errors"] += 1
    stats["create_count"] += 1
    try_send("create")

def update(key, value):
    print(f"Updating...")
    db[key] = value
    stats["update_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["update_errors"] += 1
    stats["update_count"] += 1
    try_send("update")

def delete(key):
    print(f"Deleting...")
    db.pop(key, None)
    stats["delete_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["delete_errors"] += 1
    stats["delete_count"] += 1
    try_send("delete")

def try_send(type_):
    print("try_send")
    now = datetime.datetime.now()
    interval_ms = (now - last_push[type_]).total_seconds() * 1000
    if interval_ms >= 2000:
        send_metrics(type_, interval_ms)
        send_event(type_)

def send_metrics(type_, interval_ms):
    print("sending metrics...")
    keys = GaugeMetric("fdb_keys", len(db))
    db_size = GaugeMetric("fdb_size", getsizeof(db))
    errors = CountMetric(
        name=f"fdb_{type_}_errors",
        value=stats[f"{type_}_errors"],
        interval_ms=interval_ms
    )
    cache_hits = CountMetric(
        name=f"fdb_cache_hits",
        value=stats["cache_hit"],
        interval_ms=interval_ms
    )
    response_times = stats[f"{type_}_response_times"]
    response_time_summary = SummaryMetric(
        f"fdb_{type_}_responses",
        count=len(response_times),
        min=min(response_times),
        max=max(response_times),
        sum=sum(response_times),
        interval_ms=interval_ms,
    )

    batch = [keys, db_size, errors, cache_hits, response_time_summary]
    response = metric_client.send_batch(batch)
    response.raise_for_status()
    print("Sent metrics successfully!")
    clear(type_)

def send_event(type_):
    print("sending event...")
    count = Event(
        "fdb_method", {"method": type_}
    )
    response = event_client.send_batch(count)
    response.raise_for_status()
    print("Event sent successfully!")

def clear(type_):
    stats[f"{type_}_response_times"] = []
    stats[f"{type_}_errors"] = 0
    stats["cache_hit"] = 0
    stats[f"{type_}_count"] = 0
    last_push[type_] = datetime.datetime.now()
```
Important
This example expects an environment variable called $NEW_RELIC_LICENSE_KEY.
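If you'd rather fail fast with a readable error when that variable is missing, a small guard like the following (hypothetical, not part of the course's db.py) can sit near the top of the file:

```python
import os

# Hypothetical guard: stop early with a clear message instead of a KeyError
# if NEW_RELIC_LICENSE_KEY isn't set in the environment.
if "NEW_RELIC_LICENSE_KEY" not in os.environ:
    raise RuntimeError(
        "Set the NEW_RELIC_LICENSE_KEY environment variable before running db.py"
    )
```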
Instrument your application to send logs to New Relic:
```python
import os
import random
import datetime
from sys import getsizeof
import psutil

from newrelic_telemetry_sdk import MetricClient, GaugeMetric, CountMetric, SummaryMetric
from newrelic_telemetry_sdk import EventClient, Event
from newrelic_telemetry_sdk import LogClient, Log

metric_client = MetricClient(os.environ["NEW_RELIC_LICENSE_KEY"])
event_client = EventClient(os.environ["NEW_RELIC_LICENSE_KEY"])
log_client = LogClient(os.environ["NEW_RELIC_LICENSE_KEY"])

db = {}
stats = {
    "read_response_times": [],
    "read_errors": 0,
    "read_count": 0,
    "create_response_times": [],
    "create_errors": 0,
    "create_count": 0,
    "update_response_times": [],
    "update_errors": 0,
    "update_count": 0,
    "delete_response_times": [],
    "delete_errors": 0,
    "delete_count": 0,
    "cache_hit": 0,
}
last_push = {
    "read": datetime.datetime.now(),
    "create": datetime.datetime.now(),
    "update": datetime.datetime.now(),
    "delete": datetime.datetime.now(),
}

def read(key):
    print(f"Reading...")
    if random.randint(0, 30) > 10:
        stats["cache_hit"] += 1

    stats["read_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["read_errors"] += 1
    stats["read_count"] += 1
    try_send("read")

def create(key, value):
    print(f"Writing...")
    db[key] = value
    stats["create_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["create_errors"] += 1
    stats["create_count"] += 1
    try_send("create")

def update(key, value):
    print(f"Updating...")
    db[key] = value
    stats["update_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["update_errors"] += 1
    stats["update_count"] += 1
    try_send("update")

def delete(key):
    print(f"Deleting...")
    db.pop(key, None)
    stats["delete_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["delete_errors"] += 1
    stats["delete_count"] += 1
    try_send("delete")

def try_send(type_):
    print("try_send")
    now = datetime.datetime.now()
    interval_ms = (now - last_push[type_]).total_seconds() * 1000
    if interval_ms >= 2000:
        send_metrics(type_, interval_ms)
        send_event(type_)

def send_metrics(type_, interval_ms):
    print("sending metrics...")
    keys = GaugeMetric("fdb_keys", len(db))
    db_size = GaugeMetric("fdb_size", getsizeof(db))
    errors = CountMetric(
        name=f"fdb_{type_}_errors",
        value=stats[f"{type_}_errors"],
        interval_ms=interval_ms
    )
    cache_hits = CountMetric(
        name=f"fdb_cache_hits",
        value=stats["cache_hit"],
        interval_ms=interval_ms
    )
    response_times = stats[f"{type_}_response_times"]
    response_time_summary = SummaryMetric(
        f"fdb_{type_}_responses",
        count=len(response_times),
        min=min(response_times),
        max=max(response_times),
        sum=sum(response_times),
        interval_ms=interval_ms,
    )

    batch = [keys, db_size, errors, cache_hits, response_time_summary]
    response = metric_client.send_batch(batch)
    response.raise_for_status()
    print("Sent metrics successfully!")
    clear(type_)

def send_event(type_):
    print("sending event...")
    count = Event(
        "fdb_method", {"method": type_}
    )
    response = event_client.send_batch(count)
    response.raise_for_status()
    print("Event sent successfully!")

def send_logs():
    print("sending log...")
    process = psutil.Process(os.getpid())
    memory_usage = process.memory_percent()
    log = Log("FlashDB is using " + str(round(memory_usage * 100, 2)) + "% memory")
    response = log_client.send(log)
    response.raise_for_status()
    print("Log sent successfully!")

def clear(type_):
    stats[f"{type_}_response_times"] = []
    stats[f"{type_}_errors"] = 0
    stats["cache_hit"] = 0
    stats[f"{type_}_count"] = 0
    last_push[type_] = datetime.datetime.now()
```
Here, you instrument your platform to send memory_usage to New Relic as a log.
Modify the try_send() function to send logs every 2 seconds:
```python
import os
import random
import datetime
from sys import getsizeof
import psutil

from newrelic_telemetry_sdk import MetricClient, GaugeMetric, CountMetric, SummaryMetric
from newrelic_telemetry_sdk import EventClient, Event
from newrelic_telemetry_sdk import LogClient, Log

metric_client = MetricClient(os.environ["NEW_RELIC_LICENSE_KEY"])
event_client = EventClient(os.environ["NEW_RELIC_LICENSE_KEY"])
log_client = LogClient(os.environ["NEW_RELIC_LICENSE_KEY"])

db = {}
stats = {
    "read_response_times": [],
    "read_errors": 0,
    "read_count": 0,
    "create_response_times": [],
    "create_errors": 0,
    "create_count": 0,
    "update_response_times": [],
    "update_errors": 0,
    "update_count": 0,
    "delete_response_times": [],
    "delete_errors": 0,
    "delete_count": 0,
    "cache_hit": 0,
}
last_push = {
    "read": datetime.datetime.now(),
    "create": datetime.datetime.now(),
    "update": datetime.datetime.now(),
    "delete": datetime.datetime.now(),
}

def read(key):
    print(f"Reading...")
    if random.randint(0, 30) > 10:
        stats["cache_hit"] += 1

    stats["read_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["read_errors"] += 1
    stats["read_count"] += 1
    try_send("read")

def create(key, value):
    print(f"Writing...")
    db[key] = value
    stats["create_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["create_errors"] += 1
    stats["create_count"] += 1
    try_send("create")

def update(key, value):
    print(f"Updating...")
    db[key] = value
    stats["update_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["update_errors"] += 1
    stats["update_count"] += 1
    try_send("update")

def delete(key):
    print(f"Deleting...")
    db.pop(key, None)
    stats["delete_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["delete_errors"] += 1
    stats["delete_count"] += 1
    try_send("delete")

def try_send(type_):
    print("try_send")
    now = datetime.datetime.now()
    interval_ms = (now - last_push[type_]).total_seconds() * 1000
    if interval_ms >= 2000:
        send_metrics(type_, interval_ms)
        send_event(type_)
        send_logs()

def send_metrics(type_, interval_ms):
    print("sending metrics...")
    keys = GaugeMetric("fdb_keys", len(db))
    db_size = GaugeMetric("fdb_size", getsizeof(db))
    errors = CountMetric(
        name=f"fdb_{type_}_errors",
        value=stats[f"{type_}_errors"],
        interval_ms=interval_ms
    )
    cache_hits = CountMetric(
        name=f"fdb_cache_hits",
        value=stats["cache_hit"],
        interval_ms=interval_ms
    )
    response_times = stats[f"{type_}_response_times"]
    response_time_summary = SummaryMetric(
        f"fdb_{type_}_responses",
        count=len(response_times),
        min=min(response_times),
        max=max(response_times),
        sum=sum(response_times),
        interval_ms=interval_ms,
    )

    batch = [keys, db_size, errors, cache_hits, response_time_summary]
    response = metric_client.send_batch(batch)
    response.raise_for_status()
    print("Sent metrics successfully!")
    clear(type_)

def send_event(type_):
    print("sending event...")
    count = Event(
        "fdb_method", {"method": type_}
    )
    response = event_client.send_batch(count)
    response.raise_for_status()
    print("Event sent successfully!")

def send_logs():
    print("sending log...")
    process = psutil.Process(os.getpid())
    memory_usage = process.memory_percent()
    log = Log("FlashDB is using " + str(round(memory_usage * 100, 2)) + "% memory")
    response = log_client.send(log)
    response.raise_for_status()
    print("Log sent successfully!")

def clear(type_):
    stats[f"{type_}_response_times"] = []
    stats[f"{type_}_errors"] = 0
    stats["cache_hit"] = 0
    stats[f"{type_}_count"] = 0
    last_push[type_] = datetime.datetime.now()
```
Your platform now reports the configured log every 2 seconds.
Navigate to the root of your application at build-a-quickstart-lab/send-logs/flashDB.
Run your services to confirm that they generate logs:
```
$python simulator.py
Writing...
try_send
Reading...
try_send
Reading...
try_send
Writing...
try_send
Writing...
try_send
Reading...
sending metrics...
Sent metrics successfully!
sending event...
Event sent successfully!
sending log...
Log sent successfully!
```
Alternative options
If our language SDKs don't fit your needs, try one of our other options:
Log forwarding: New Relic offers a variety of log forwarding solutions that let you collect logs from your operating system, from cloud platforms (including Amazon AWS, Google Cloud Platform, Microsoft Azure, and Heroku), from Kubernetes and Docker, and from APM.
Manual implementation: If the previous options don't fit your needs, you can always instrument your own library to make a POST request to the New Relic Log API, as sketched below.
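As a rough illustration of that manual route, the following sketch POSTs a single log record with the requests library. It assumes the US-region Log API endpoint (https://log-api.newrelic.com/log/v1), the Api-Key header carrying your license key, and a simple JSON payload; verify the endpoint and payload shape against the Log API docs for your account and region:

```python
import os

import requests  # third-party HTTP client, assumed to be installed

# Assumed US-region Log API endpoint; EU accounts use a different host.
LOG_API_URL = "https://log-api.newrelic.com/log/v1"

payload = {
    "message": "FlashDB manual log example",
    "service": "flashDB",  # arbitrary custom attribute, for illustration only
}

response = requests.post(
    LOG_API_URL,
    json=payload,
    headers={"Api-Key": os.environ["NEW_RELIC_LICENSE_KEY"]},
    timeout=5,
)
response.raise_for_status()
print("Log accepted:", response.status_code)
```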
In this procedure, you instrumented your service to send logs to New Relic. Next, instrument it to send traces.
Tip
This procedure is part of a course that teaches you how to build a quickstart. Continue on to the next lesson: send traces from your product.